pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at | arxiv | languages | tags_str | text_str | text_lists | processed_texts |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [GitHub page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
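For illustration, a minimal loading sketch with 🤗 transformers follows. This is an assumption-laden sketch, not the official runner: the green-bit-llm repository linked above is the authoritative way to run these low-bit checkpoints, and the prompt and generation settings here are placeholders.
```python
# Hedged sketch: generic transformers-style loading. The official runner lives
# in the green-bit-llm repo; this assumes the checkpoint can be loaded through
# the standard AutoModel API, which may not hold for low-bit weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```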
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.5
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T10:19:15+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GreenBit LLMs
These are GreenBitAI's pretrained low-bit LLMs, offering extreme compression while retaining strong performance.
Please refer to our GitHub page for the code to run the model and more information.
|
[
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [GitHub page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
### Zero-shot Evaluation
We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the `llm_eval` library and list the results below:
| **Repository (Qwen Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** |
|:----------------------------------|:------------:|:------------:|:-----------:|:-------------:|:-------------:|:-----------:|:----------:|:-----------:|:-----------:|:-------------:|:-------------:|:-------------:|:---------:|
| `Qwen-1.5-0.5B-layer-mix-bpw-2.2` | 0.398 | 0.170 | 0.443 | 0.527 | 0.332 | 0.238 | 0.634 | 0.620 | 0.318 | 0.332 | 0.338 | 0.330 | 0.500 |
| `Qwen-1.5-0.5B-layer-mix-bpw-2.5` | 0.394 | 0.170 | 0.514 | 0.541 | 0.337 | 0.232 | 0.637 | 0.496 | 0.318 | 0.316 | 0.358 | 0.326 | 0.490 |
| `Qwen-1.5-0.5B-layer-mix-bpw-3.0` | 0.407 | 0.198 | 0.533 | 0.536 | 0.348 | 0.234 | 0.671 | 0.552 | 0.323 | 0.330 | 0.333 | 0.335 | 0.495 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.2` | 0.415 | 0.218 | 0.539 | 0.586 | 0.392 | 0.260 | 0.678 | 0.622 | 0.333 | 0.333 | 0.333 | 0.336 | 0.464 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.5` | 0.423 | 0.222 | 0.592 | 0.585 | 0.406 | 0.267 | 0.695 | 0.629 | 0.336 | 0.314 | 0.339 | 0.361 | 0.507 |
| `Qwen-1.5-1.8B-layer-mix-bpw-3.0` | 0.438 | 0.246 | 0.576 | 0.563 | 0.413 | 0.277 | 0.694 | 0.645 | 0.352 | 0.323 | 0.336 | 0.343 | 0.492 |
| `Qwen-1.5-4B-layer-mix-bpw-2.2` | 0.480 | 0.254 | 0.663 | 0.623 | 0.463 | 0.339 | 0.712 | 0.718 | 0.349 | 0.326 | 0.355 | 0.384 | 0.513 |
| `Qwen-1.5-4B-layer-mix-bpw-2.5` | 0.490 | 0.266 | 0.677 | 0.629 | 0.473 | 0.365 | 0.732 | 0.717 | 0.351 | 0.372 | 0.352 | 0.360 | 0.502 |
| `Qwen-1.5-4B-layer-mix-bpw-3.0` | 0.502 | 0.268 | 0.678 | 0.642 | 0.494 | 0.358 | 0.755 | 0.757 | 0.380 | 0.395 | 0.395 | 0.392 | 0.519 |
| `Qwen-1.5-7B-layer-mix-bpw-2.2` | 0.513 | 0.278 | 0.669 | 0.654 | 0.504 | 0.389 | 0.741 | 0.759 | 0.376 | 0.383 | 0.410 | 0.403 | 0.517 |
| `Qwen-1.5-7B-layer-mix-bpw-2.5` | 0.520 | 0.294 | 0.705 | 0.650 | 0.520 | 0.387 | 0.750 | 0.769 | 0.371 | 0.445 | 0.424 | 0.398 | 0.564 |
| `Qwen-1.5-7B-layer-mix-bpw-3.0` | 0.531 | 0.292 | 0.713 | 0.654 | 0.545 | 0.405 | 0.764 | 0.807 | 0.383 | 0.424 | 0.393 | 0.414 | 0.627 |
| `Qwen-1.5-14B-layer-mix-bpw-2.5` | 0.553 | 0.318 | 0.727 | 0.682 | 0.564 | 0.413 | 0.775 | 0.792 | 0.390 | 0.472 | 0.434 | 0.446 | 0.623 |
| `Qwen-1.5-32B-layer-mix-bpw-3.0` | 0.599 | 0.346 | 0.775 | 0.722 | 0.620 | 0.492 | 0.807 | 0.853 | 0.444 | 0.515 | 0.494 | 0.478 | 0.642 |
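Assuming `llm_eval` refers to EleutherAI's lm-evaluation-harness (imported as `lm_eval`), a zero-shot run over the benchmarks in the table might look like the sketch below; the task list and model backend are assumptions, not the authors' exact configuration.
```python
# Hedged sketch: zero-shot evaluation with EleutherAI's lm-evaluation-harness
# (pip install lm-eval). Task names mirror the table columns; the backend and
# exact task list used by the authors are assumptions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.5",
    tasks=["openbookqa", "arc_easy", "winogrande", "hellaswag", "arc_challenge",
           "piqa", "boolq", "race", "anli_r1", "anli_r2", "anli_r3", "wic"],
    num_fewshot=0,
)
print(results["results"])
```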
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-3.0
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T10:19:23+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
GreenBit LLMs
=============
These are GreenBitAI's pretrained low-bit LLMs, offering extreme compression while retaining strong performance.
Please refer to our GitHub page for the code to run the model and more information.
### Zero-shot Evaluation
We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\_eval' library and list the results below:
|
[
"### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:"
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
tomaszki/stablelm-30
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T10:21:25+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
tomaszki/stablelm-30-a
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T10:22:59+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
mlx
|
# GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.2-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.2`](https://huggingface.co/GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.2).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.2) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate

# Load the quantized MLX weights and the matching tokenizer from the Hub.
model, tokenizer = load("GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.2-mlx")
# Generate a completion; verbose=True prints tokens as they are produced.
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
{"license": "apache-2.0", "tags": ["mlx"]}
|
GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.2-mlx
| null |
[
"mlx",
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null |
2024-04-13T10:25:29+00:00
|
[] |
[] |
TAGS
#mlx #safetensors #qwen2 #license-apache-2.0 #region-us
|
# GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.2-mlx
This quantized low-bit model was converted to MLX format from GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.2.
Refer to the original model card for more details on the model.
## Use with mlx
|
[
"# GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.2-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.2']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
[
"TAGS\n#mlx #safetensors #qwen2 #license-apache-2.0 #region-us \n",
"# GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.2-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.2']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [GitHub page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
### Zero-shot Evaluation
We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the `llm_eval` library and list the results below:
| **Repository (Qwen Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** |
|:----------------------------------|:------------:|:------------:|:-----------:|:-------------:|:-------------:|:-----------:|:----------:|:-----------:|:-----------:|:-------------:|:-------------:|:-------------:|:---------:|
| `Qwen-1.5-0.5B-layer-mix-bpw-2.2` | 0.398 | 0.170 | 0.443 | 0.527 | 0.332 | 0.238 | 0.634 | 0.620 | 0.318 | 0.332 | 0.338 | 0.330 | 0.500 |
| `Qwen-1.5-0.5B-layer-mix-bpw-2.5` | 0.394 | 0.170 | 0.514 | 0.541 | 0.337 | 0.232 | 0.637 | 0.496 | 0.318 | 0.316 | 0.358 | 0.326 | 0.490 |
| `Qwen-1.5-0.5B-layer-mix-bpw-3.0` | 0.407 | 0.198 | 0.533 | 0.536 | 0.348 | 0.234 | 0.671 | 0.552 | 0.323 | 0.330 | 0.333 | 0.335 | 0.495 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.2` | 0.415 | 0.218 | 0.539 | 0.586 | 0.392 | 0.260 | 0.678 | 0.622 | 0.333 | 0.333 | 0.333 | 0.336 | 0.464 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.5` | 0.423 | 0.222 | 0.592 | 0.585 | 0.406 | 0.267 | 0.695 | 0.629 | 0.336 | 0.314 | 0.339 | 0.361 | 0.507 |
| `Qwen-1.5-1.8B-layer-mix-bpw-3.0` | 0.438 | 0.246 | 0.576 | 0.563 | 0.413 | 0.277 | 0.694 | 0.645 | 0.352 | 0.323 | 0.336 | 0.343 | 0.492 |
| `Qwen-1.5-4B-layer-mix-bpw-2.2` | 0.480 | 0.254 | 0.663 | 0.623 | 0.463 | 0.339 | 0.712 | 0.718 | 0.349 | 0.326 | 0.355 | 0.384 | 0.513 |
| `Qwen-1.5-4B-layer-mix-bpw-2.5` | 0.490 | 0.266 | 0.677 | 0.629 | 0.473 | 0.365 | 0.732 | 0.717 | 0.351 | 0.372 | 0.352 | 0.360 | 0.502 |
| `Qwen-1.5-4B-layer-mix-bpw-3.0` | 0.502 | 0.268 | 0.678 | 0.642 | 0.494 | 0.358 | 0.755 | 0.757 | 0.380 | 0.395 | 0.395 | 0.392 | 0.519 |
| `Qwen-1.5-7B-layer-mix-bpw-2.2` | 0.513 | 0.278 | 0.669 | 0.654 | 0.504 | 0.389 | 0.741 | 0.759 | 0.376 | 0.383 | 0.410 | 0.403 | 0.517 |
| `Qwen-1.5-7B-layer-mix-bpw-2.5` | 0.520 | 0.294 | 0.705 | 0.650 | 0.520 | 0.387 | 0.750 | 0.769 | 0.371 | 0.445 | 0.424 | 0.398 | 0.564 |
| `Qwen-1.5-7B-layer-mix-bpw-3.0` | 0.531 | 0.292 | 0.713 | 0.654 | 0.545 | 0.405 | 0.764 | 0.807 | 0.383 | 0.424 | 0.393 | 0.414 | 0.627 |
| `Qwen-1.5-14B-layer-mix-bpw-2.5` | 0.553 | 0.318 | 0.727 | 0.682 | 0.564 | 0.413 | 0.775 | 0.792 | 0.390 | 0.472 | 0.434 | 0.446 | 0.623 |
| `Qwen-1.5-32B-layer-mix-bpw-3.0` | 0.599 | 0.346 | 0.775 | 0.722 | 0.620 | 0.492 | 0.807 | 0.853 | 0.444 | 0.515 | 0.494 | 0.478 | 0.642 |
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-14B-Chat-layer-mix-bpw-3.0
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T10:26:34+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
GreenBit LLMs
=============
These are GreenBitAI's pretrained low-bit LLMs, offering extreme compression while retaining strong performance.
Please refer to our GitHub page for the code to run the model and more information.
### Zero-shot Evaluation
We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\_eval' library and list the results below:
|
[
"### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:"
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:"
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [GitHub page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
### Zero-shot Evaluation
We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the `llm_eval` library and list the results below:
| **Repository (Qwen Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** |
|:----------------------------------|:------------:|:------------:|:-----------:|:-------------:|:-------------:|:-----------:|:----------:|:-----------:|:-----------:|:-------------:|:-------------:|:-------------:|:---------:|
| `Qwen-1.5-0.5B-layer-mix-bpw-2.2` | 0.398 | 0.170 | 0.443 | 0.527 | 0.332 | 0.238 | 0.634 | 0.620 | 0.318 | 0.332 | 0.338 | 0.330 | 0.500 |
| `Qwen-1.5-0.5B-layer-mix-bpw-2.5` | 0.394 | 0.170 | 0.514 | 0.541 | 0.337 | 0.232 | 0.637 | 0.496 | 0.318 | 0.316 | 0.358 | 0.326 | 0.490 |
| `Qwen-1.5-0.5B-layer-mix-bpw-3.0` | 0.407 | 0.198 | 0.533 | 0.536 | 0.348 | 0.234 | 0.671 | 0.552 | 0.323 | 0.330 | 0.333 | 0.335 | 0.495 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.2` | 0.415 | 0.218 | 0.539 | 0.586 | 0.392 | 0.260 | 0.678 | 0.622 | 0.333 | 0.333 | 0.333 | 0.336 | 0.464 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.5` | 0.423 | 0.222 | 0.592 | 0.585 | 0.406 | 0.267 | 0.695 | 0.629 | 0.336 | 0.314 | 0.339 | 0.361 | 0.507 |
| `Qwen-1.5-1.8B-layer-mix-bpw-3.0` | 0.438 | 0.246 | 0.576 | 0.563 | 0.413 | 0.277 | 0.694 | 0.645 | 0.352 | 0.323 | 0.336 | 0.343 | 0.492 |
| `Qwen-1.5-4B-layer-mix-bpw-2.2` | 0.480 | 0.254 | 0.663 | 0.623 | 0.463 | 0.339 | 0.712 | 0.718 | 0.349 | 0.326 | 0.355 | 0.384 | 0.513 |
| `Qwen-1.5-4B-layer-mix-bpw-2.5` | 0.490 | 0.266 | 0.677 | 0.629 | 0.473 | 0.365 | 0.732 | 0.717 | 0.351 | 0.372 | 0.352 | 0.360 | 0.502 |
| `Qwen-1.5-4B-layer-mix-bpw-3.0` | 0.502 | 0.268 | 0.678 | 0.642 | 0.494 | 0.358 | 0.755 | 0.757 | 0.380 | 0.395 | 0.395 | 0.392 | 0.519 |
| `Qwen-1.5-7B-layer-mix-bpw-2.2` | 0.513 | 0.278 | 0.669 | 0.654 | 0.504 | 0.389 | 0.741 | 0.759 | 0.376 | 0.383 | 0.410 | 0.403 | 0.517 |
| `Qwen-1.5-7B-layer-mix-bpw-2.5` | 0.520 | 0.294 | 0.705 | 0.650 | 0.520 | 0.387 | 0.750 | 0.769 | 0.371 | 0.445 | 0.424 | 0.398 | 0.564 |
| `Qwen-1.5-7B-layer-mix-bpw-3.0` | 0.531 | 0.292 | 0.713 | 0.654 | 0.545 | 0.405 | 0.764 | 0.807 | 0.383 | 0.424 | 0.393 | 0.414 | 0.627 |
| `Qwen-1.5-14B-layer-mix-bpw-2.5` | 0.553 | 0.318 | 0.727 | 0.682 | 0.564 | 0.413 | 0.775 | 0.792 | 0.390 | 0.472 | 0.434 | 0.446 | 0.623 |
| `Qwen-1.5-32B-layer-mix-bpw-3.0` | 0.599 | 0.346 | 0.775 | 0.722 | 0.620 | 0.492 | 0.807 | 0.853 | 0.444 | 0.515 | 0.494 | 0.478 | 0.642 |
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-14B-Chat-layer-mix-bpw-2.5
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T10:26:44+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
GreenBit LLMs
=============
These are GreenBitAI's pretrained low-bit LLMs, offering extreme compression while retaining strong performance.
Please refer to our GitHub page for the code to run the model and more information.
### Zero-shot Evaluation
We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\_eval' library and list the results below:
|
[
"### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:"
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:"
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [GitHub page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
### Zero-shot Evaluation
We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the `llm_eval` library and list the results below:
| **Repository (Qwen Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** |
|:----------------------------------|:------------:|:------------:|:-----------:|:-------------:|:-------------:|:-----------:|:----------:|:-----------:|:-----------:|:-------------:|:-------------:|:-------------:|:---------:|
| `Qwen-1.5-0.5B-layer-mix-bpw-2.2` | 0.398 | 0.170 | 0.443 | 0.527 | 0.332 | 0.238 | 0.634 | 0.620 | 0.318 | 0.332 | 0.338 | 0.330 | 0.500 |
| `Qwen-1.5-0.5B-layer-mix-bpw-2.5` | 0.394 | 0.170 | 0.514 | 0.541 | 0.337 | 0.232 | 0.637 | 0.496 | 0.318 | 0.316 | 0.358 | 0.326 | 0.490 |
| `Qwen-1.5-0.5B-layer-mix-bpw-3.0` | 0.407 | 0.198 | 0.533 | 0.536 | 0.348 | 0.234 | 0.671 | 0.552 | 0.323 | 0.330 | 0.333 | 0.335 | 0.495 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.2` | 0.415 | 0.218 | 0.539 | 0.586 | 0.392 | 0.260 | 0.678 | 0.622 | 0.333 | 0.333 | 0.333 | 0.336 | 0.464 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.5` | 0.423 | 0.222 | 0.592 | 0.585 | 0.406 | 0.267 | 0.695 | 0.629 | 0.336 | 0.314 | 0.339 | 0.361 | 0.507 |
| `Qwen-1.5-1.8B-layer-mix-bpw-3.0` | 0.438 | 0.246 | 0.576 | 0.563 | 0.413 | 0.277 | 0.694 | 0.645 | 0.352 | 0.323 | 0.336 | 0.343 | 0.492 |
| `Qwen-1.5-4B-layer-mix-bpw-2.2` | 0.480 | 0.254 | 0.663 | 0.623 | 0.463 | 0.339 | 0.712 | 0.718 | 0.349 | 0.326 | 0.355 | 0.384 | 0.513 |
| `Qwen-1.5-4B-layer-mix-bpw-2.5` | 0.490 | 0.266 | 0.677 | 0.629 | 0.473 | 0.365 | 0.732 | 0.717 | 0.351 | 0.372 | 0.352 | 0.360 | 0.502 |
| `Qwen-1.5-4B-layer-mix-bpw-3.0` | 0.502 | 0.268 | 0.678 | 0.642 | 0.494 | 0.358 | 0.755 | 0.757 | 0.380 | 0.395 | 0.395 | 0.392 | 0.519 |
| `Qwen-1.5-7B-layer-mix-bpw-2.2` | 0.513 | 0.278 | 0.669 | 0.654 | 0.504 | 0.389 | 0.741 | 0.759 | 0.376 | 0.383 | 0.410 | 0.403 | 0.517 |
| `Qwen-1.5-7B-layer-mix-bpw-2.5` | 0.520 | 0.294 | 0.705 | 0.650 | 0.520 | 0.387 | 0.750 | 0.769 | 0.371 | 0.445 | 0.424 | 0.398 | 0.564 |
| `Qwen-1.5-7B-layer-mix-bpw-3.0` | 0.531 | 0.292 | 0.713 | 0.654 | 0.545 | 0.405 | 0.764 | 0.807 | 0.383 | 0.424 | 0.393 | 0.414 | 0.627 |
| `Qwen-1.5-14B-layer-mix-bpw-2.5` | 0.553 | 0.318 | 0.727 | 0.682 | 0.564 | 0.413 | 0.775 | 0.792 | 0.390 | 0.472 | 0.434 | 0.446 | 0.623 |
| `Qwen-1.5-32B-layer-mix-bpw-3.0` | 0.599 | 0.346 | 0.775 | 0.722 | 0.620 | 0.492 | 0.807 | 0.853 | 0.444 | 0.515 | 0.494 | 0.478 | 0.642 |
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-14B-Chat-layer-mix-bpw-2.2
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T10:26:52+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
GreenBit LLMs
=============
These are GreenBitAI's pretrained low-bit LLMs, offering extreme compression while retaining strong performance.
Please refer to our GitHub page for the code to run the model and more information.
### Zero-shot Evaluation
We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\_eval' library and list the results below:
|
[
"### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:"
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased_classification_finetuned_news_all_adptive
This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5638
- F1: 0.8525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
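For reference, a minimal sketch of how these hyperparameters map onto 🤗 `TrainingArguments`; `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the library defaults.
```python
# Hedged sketch: the hyperparameters above expressed as TrainingArguments.
# output_dir is a placeholder; Adam's betas and epsilon are the defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert_news_finetune",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    num_train_epochs=10,
)
```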
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3636 | 1.0 | 2384 | 0.3592 | 0.8071 |
| 0.3492 | 2.0 | 4768 | 0.3220 | 0.8233 |
| 0.3052 | 3.0 | 7152 | 0.3036 | 0.8378 |
| 0.2627 | 4.0 | 9536 | 0.3304 | 0.8404 |
| 0.1948 | 5.0 | 11920 | 0.3458 | 0.8434 |
| 0.1536 | 6.0 | 14304 | 0.4380 | 0.8524 |
| 0.164 | 7.0 | 16688 | 0.5638 | 0.8525 |
| 0.136 | 8.0 | 19072 | 0.7250 | 0.8503 |
| 0.076 | 9.0 | 21456 | 0.8166 | 0.8510 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "distilbert/distilbert-base-multilingual-cased", "model-index": [{"name": "distilbert-base-multilingual-cased_classification_finetuned_news_all_adptive", "results": []}]}
|
Mou11209203/distilbert-base-multilingual-cased_classification_finetuned_news_all_adptive
| null |
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T10:27:21+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-multilingual-cased\_classification\_finetuned\_news\_all\_adptive
=================================================================================
This model is a fine-tuned version of distilbert/distilbert-base-multilingual-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5638
* F1: 0.8525
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Ghost 7B Alpha
<img src="https://cdn-uploads.huggingface.co/production/uploads/600ae38cc92b79f54efd4556/QPrQZMQX_jzyYngmreP0_.jpeg" alt="Ghost 7B Alpha Logo" width="400" style="margin-left: auto; margin-right: auto; display: block;"/>
A generation of large language models focused on optimized reasoning, multi-task knowledge, and tool support.
# Introduction
**Ghost 7B Alpha** is a large language model fine-tuned from Mistral 7B, with 7 billion parameters. It was developed to optimize reasoning ability and multi-task knowledge and to support tool usage. The model works best in its main trained and optimized languages, English and Vietnamese.
Overall, the model is well suited as a pretrained base for continued development on your desired tasks: building virtual assistants, coding, translation, question answering, document creation, and more. It is an efficient, fast, and extremely inexpensive open model.
## Specifications
- Name: **Ghost 7B Alpha**.
- Model size: 7 billion parameters.
- Context length: 8K (8,192 tokens).
- Languages: English and Vietnamese.
- Main tasks: reasoning, multi-task knowledge, and function tools.
- License: [Ghost 7B LICENSE AGREEMENT](https://ghost-x.org/ghost-7b-license).
- Based on: Mistral 7B.
- Distributions: Standard (BF16), GGUF, AWQ.
- Developed by: **Ghost X**, [Hieu Lam](https://huggingface.co/lamhieu).
### Links
- Card model: [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha).
- Official website: [Ghost 7B Alpha](https://ghost-x.org/docs/models/ghost-7b-alpha/).
- Demo: [Playground with Ghost 7B Alpha](https://ghost-x.org/docs/notebooks/playground-with-ghost-7b-alpha/).
## Distributions
We provide multiple distributions so you can choose the access option that best suits your needs. Make sure you know which version you need and which will help you work best.
| Version | Model card |
| ------- | -------------------------------------------------------------------- |
| BF16 | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha) |
| GGUF | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha-gguf) |
| AWQ | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha-awq) |
### Standard (BF16)
The standard distribution was used to run the evaluations and offers the best text-generation quality.
### GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
### AWQ
AWQ provides 4-bit quantization with roughly a 2x speedup during inference.
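For illustration, a GGUF build can be run locally with `llama-cpp-python`; this is a sketch, and the exact `.gguf` filename in the repository is an assumption.
```python
# Hedged sketch: running a GGUF distribution via llama-cpp-python
# (pip install llama-cpp-python). The .gguf filename is an assumption.
from llama_cpp import Llama

llm = Llama(model_path="ghost-7b-alpha.Q4_K_M.gguf", n_ctx=8192)
out = llm("Hello, who are you?", max_tokens=64)
print(out["choices"][0]["text"])
```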
## Prompt / Chat template
### Roles
Overall, the model supports roles including: **system**, **plugins**, **user**, **assistant**, **execute**, **response**.
- **system**: Similar to OpenAI's GPT-4, this is where you declare context or specify characteristics and expertise for the model.
- **user**: This is user input, which can be a question or an instruction for the model to follow.
- **assistant**: The model's response to a question or instruction received from **user**.
- **plugins**: Declares the tools the model is allowed to use, following a predetermined schema.
- **execute**: Mocks the data the model returns when it decides to run a tool; contains the tool to use and its arguments.
- **response**: The result of executing the tool, so that the model can give an appropriate answer to the user's input.
See the actual code [here](https://huggingface.co/ghost-x/ghost-7b-alpha/blob/main/tokenizer_config.json#L36).
### Tips
- Setting a system prompt with the right context and characteristics can improve the model's performance considerably.
- You can also use the system prompt to prevent unwanted behavior.
- In most cases, if you are unsure, you can leave the system prompt blank; performance will then be best for general use.
### Chat template
#### Default chat
In the standard case, without using function tools, only use roles such as **system**, **user** and **assistant**. Conventions are in the following order:
- (1) **system**
- (2) **user**
- (3) **assistant**
Repeat (2) and (3) continuously throughout the conversation, as in the sketch below.
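As a sketch, the default flow can be rendered with the tokenizer's chat template; the message contents here are illustrative.
```python
# Hedged sketch: rendering the default system/user/assistant flow with the
# model's chat template. Message contents are illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ghost-x/ghost-7b-alpha")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # (1)
    {"role": "user", "content": "Why is the sky blue?"},            # (2)
    # the model's reply fills role (3) "assistant"; repeat (2)-(3) afterwards
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```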
#### Chat with tools
In case of using tools, the conventions are in the following order:
- (1) **system**
- (2) **plugins**
- (3) **user**
- A. When the model finds it inappropriate to use the tool:
- (4) **assistant**: if the model can answer directly or no tool is suitable, it proceeds to answer the user's question.
- B. When the model chooses to use the tool:
- (4) **execute** returns the arguments needed to execute the specified tool.
- (5) **response** returns the tool's results to the model.
- (6) **assistant**: the model then answers the user's question using the tool result.
Repeat (3) and (A) or (B) continuously throughout the conversation; a schematic sketch follows.
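Schematically, a tool-using exchange might be sequenced as below. The payload formats for **plugins**, **execute**, and **response** are assumptions made for illustration; the authoritative schema is the chat template in the tokenizer config linked above.
```python
# Hedged sketch: the ordering of roles in a tool-using conversation (path B).
# The payload shapes for "plugins", "execute", and "response" are assumptions;
# the real schema is defined by the model's chat template.
conversation = [
    {"role": "system", "content": "You can use tools when helpful."},                      # (1)
    {"role": "plugins", "content": '[{"name": "get_weather", "args": ["city"]}]'},         # (2)
    {"role": "user", "content": "What is the weather in Hanoi?"},                          # (3)
    {"role": "execute", "content": '{"tool": "get_weather", "args": {"city": "Hanoi"}}'},  # (4)
    {"role": "response", "content": '{"temperature_c": 31, "condition": "sunny"}'},        # (5)
    # (6) the model answers as "assistant" using the tool result
]
```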
# Showcases
## English
Let's try a question taken from [OpenAI GPT-4](https://openai.com/gpt-4), originally used to demonstrate GPT-4's superiority over ChatGPT.
<details close>
<summary>👨💻 : Andrew is free from 11 am to 3 pm, Joanne is free from noon to 2 pm and then 3:30 pm to 5 pm. Hannah is available at noon for half an hour, and then 4 pm to 6 pm. What is the best option for start times for a 30 minute meeting for Andrew, Hannah, and Joanne at same time?</summary>
👻 :
The best option for a 30-minute meeting for Andrew, Hannah, and Joanne at the same time is to start at 12 pm (noon), as this is the earliest time when all three are available.
Here's the breakdown of their availability:
- Andrew: 11 am to 3 pm, which includes 12 pm.
- Joanne: Noon to 2 pm, which includes 12 pm.
- Hannah: Noon for half an hour (30 minutes), which includes 12 pm. Then she is available from 4 pm to 6 pm.
Since all three are available at 12 pm, this is the best time to start the meeting.
</details>
A quiz: can the model guess what is being described?
<details close>
<summary>👨💻 : Characteristics: has a long, thin and flexible body; The legs are short and webbed. Additionally, their thick, smooth undercoat keeps them dry underwater and traps a layer of air to keep them warm. Question: What animal is that?</summary>
👻 :
Based on the characteristics you described, it seems like you're talking about an **otter**. Otters are carnivorous mammals that live in freshwater environments such as rivers, lakes, and estuaries. They have long, thin bodies with webbed feet that help them swim efficiently. Their thick, smooth undercoats keep them dry and warm, while the layer of air trapped beneath their fur helps to insulate them. Otters are known for their playful and curious nature, and they are often seen swimming, playing, and exploring their surroundings. 🦡🐭
</details>
This question I honestly cannot answer myself, but the model's reasoning is quite rigorous. What do you think of its answer?
<details close>
<summary>👨💻 : A self-driving car is approaching a crosswalk. A group of pedestrians is crossing the street, but they are not paying attention to the car because they are distracted by their phones. The car's brakes are malfunctioning, and it is unable to stop. Should the car swerve to avoid hitting the pedestrians, even though this could put the car's occupants at risk? Be convincing with your examples or sharp reasoning.</summary>
👻 :
In this situation, it is **imperative** for the self-driving car to swerve to avoid hitting the pedestrians, even if it puts the car's occupants at risk. Here's why:
**1. Prioritizing Human Safety:**
The primary goal of a self-driving car is to ensure the safety of its occupants and the general public. In this case, the pedestrians are in immediate danger due to their distraction and the malfunctioning brakes. Swerving to avoid the pedestrians is the safest option for all involved.
**2. Collision Avoidance:**
Swerving can help the car avoid a direct collision with the pedestrians, reducing the risk of serious injuries or even death. Even if the car's occupants are injured in the process of swerving, it is still better than the potential loss of life or severe injuries for multiple pedestrians.
**3. Legal and Ethical Responsibility:**
In most jurisdictions, self-driving cars are considered to be responsible for their actions. Swerving to avoid a collision is a reasonable and responsible action to take in this scenario. Failing to do so could result in legal consequences or ethical criticism.
**4. Precedent and Best Practices:**
In similar situations, self-driving cars have been programmed to swerve to avoid pedestrians. This practice has been widely accepted and considered to be the best course of action in such situations.
**5. Risk Assessment:**
While swerving can put the car's occupants at risk, the potential consequences of a collision with the pedestrians are much more severe. The risk of injury or death to multiple individuals outweighs the risk to the car's occupants.
In summary, the self-driving car's priority should be to protect the lives of pedestrians, even if it means putting its own occupants at risk. Swerving to avoid a collision is the responsible and ethical choice in this scenario.
</details>
A reasoning question that came up while writing the announcement article for **Ghost 7B Alpha**. The model gave an impressive answer, at least to its creator.
<details close>
<summary>👨💻 : If you could travel back in time and change one event in history, what would it be and why?</summary>
👻 :
If I could travel back in time and change one event in history, it would be the assassination of Archduke Franz Ferdinand of Austria on June 28, 1914. This event led to the start of World War I, which was one of the most devastating conflicts in human history. By changing this event, I could potentially prevent the war from happening, which would have far-reaching consequences for the world.
Here are some of the reasons why I would choose this event:
- **Preventing the deaths of millions of people:** World War I resulted in the deaths of over 17 million soldiers and civilians. Changing this event would save countless lives.
- **Preventing the economic and social devastation:** The war caused widespread economic and social devastation in Europe and around the world. By preventing the war, I could help to avoid these consequences.
- **Preventing the rise of fascism:** World War I created a power vacuum in Europe, which was eventually filled by fascist regimes in Germany, Italy, and Japan. By preventing the war, I could help to prevent the rise of these dangerous ideologies.
- **Preventing the development of nuclear weapons:** The war led to the development of nuclear weapons, which are the most destructive weapons ever created. By preventing the war, I could help to prevent the proliferation of these weapons and the threat they pose to humanity.
</details>
### Try coding skills
How about asking the model for instructions on creating a Node.js application using Docker for deployment?
<details close>
<summary>👨💻 : Create a Nodejs project using Express as a server and Docker with a Dockerfile for deployment with step-by-step instructions with detailed explanations.</summary>
👻 :
**Step 1: Create a new Node.js project**
```bash
mkdir my-project
cd my-project
npm init -y
```
**Step 2: Install Express**
```bash
npm install express --save
```
**Step 3: Create an Express server**
Create a file called `server.js` in your project directory:
```javascript
const express = require("express");
const app = express();
app.get("/", (req, res) => {
res.send("Hello World!");
});
app.listen(3000, () => {
console.log("Server listening on port 3000");
});
```
**Step 4: Create a Dockerfile**
Create a file called `Dockerfile` in your project directory:
```dockerfile
FROM node:16
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```
**Step 5: Build the Docker image**
```bash
docker build -t my-project-image .
```
**Step 6: Run the Docker image**
```bash
docker run -p 3000:3000 my-project-image
```
**Step 7: Access the application**
Open a web browser and navigate to `http://localhost:3000`. You should see the message "Hello World!" displayed.
**Explanation:**
- **Step 1:** Creates a new Node.js project with `npm init -y` to initialize a new Node.js project with a default `package.json` file.
- **Step 2:** Installs Express using `npm install express --save`. This command installs Express and adds it to the `package.json` file.
- **Step 3:** Creates an Express server in `server.js` that responds with "Hello World!" when a request is made to the root path (`/`).
- **Step 4:** Creates a Dockerfile that defines the image for the project. The Dockerfile specifies that the image should use Node.js version 16, work from the `/usr/src/app` directory, copy the `package.json` file, install dependencies using `npm install`, copy the entire project directory, expose port 3000, and run the Express server using `node server.js`.
- **Step 5:** Builds the Docker image using `docker build -t my-project-image .`. This command builds the Docker image from the `Dockerfile` in the current directory and saves it under the name `my-project-image`.
- **Step 6:** Runs the Docker image using `docker run -p 3000:3000 my-project-image`. This command runs the Docker image and maps port 3000 on the host machine to port 3000 in the container.
- **Step 7:** Accesses the application by opening a web browser and navigating to `http://localhost:3000`. This will display the message "Hello World!" because the Express server is running in the Docker container.
</details>
Now let's ask the model to build a simple neural network in Python.
<details close>
<summary>👨💻 : Explain how to use the Keras or TensorFlow library in Python to build and train a multi-layer neural network for an image classification task</summary>
👻 :
**Using Keras:**
1. Import the necessary libraries:
```python
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
```
2. Create a sequential model:
```python
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))
```
3. Compile the model:
```python
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```
4. Train the model:
```python
model.fit(x_train, y_train, epochs=10)
```
**Using TensorFlow:**
1. Import the necessary libraries:
```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
```
2. Create a sequential model:
```python
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))
```
3. Compile the model:
```python
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```
4. Train the model:
```python
model.fit(x_train, y_train, epochs=10)
```
**Comparison:**
Both Keras and TensorFlow are powerful libraries for building and training neural networks in Python. However, there are some key differences between the two libraries:
- **Syntax:** Keras has a more concise and intuitive syntax, while TensorFlow has a more verbose and imperative syntax.
- **Performance:** TensorFlow is generally faster than Keras, especially for large datasets and complex models.
- **Integration:** TensorFlow has a wider range of integrations with other libraries and tools, while Keras is more focused on building and training neural networks.
Ultimately, the best choice for you depends on your specific needs and preferences. If you prefer a more concise and intuitive syntax, Keras is a good option. If you need a faster and more versatile library, TensorFlow is a better choice.
</details>
Let's ask the model how to overcome the "hallucination" phenomenon in AI development, giving it context so it can propose a solution.
<details close>
<summary>👨💻 : Explain the cause of A.I hallucination and find a solution to overcome it, the specific context is LLMs.</summary>
👻 :
**Causes of A.I. Hallucination:**
- **Data Quality:** Incorrect or biased data can lead to hallucinations in LLMs. For example, if an LLM is trained on a dataset that contains biased or incomplete information, it may learn to generate biased or incomplete responses.
- **Model Architecture:** Certain model architectures are more susceptible to hallucinations than others. For example, Transformers, which are commonly used in LLMs, can be more prone to hallucinations than recurrent neural networks (RNNs).
- **Hyperparameters:** The hyperparameters of an LLM, such as the learning rate and the batch size, can also affect the likelihood of hallucinations. For example, a higher learning rate may lead to hallucinations more often than a lower learning rate.
**Solutions to Overcome A.I. Hallucination:**
- **Data Quality:** Ensure that the data used to train the LLM is accurate, complete, and unbiased. This can be done by cleaning the data, removing outliers, and using a variety of data sources.
- **Model Architecture:** Choose an LLM architecture that is less susceptible to hallucinations. For example, RNNs are generally less prone to hallucinations than Transformers.
- **Hyperparameters:** Tune the hyperparameters of the LLM to minimize the likelihood of hallucinations. This can be done by experimenting with different learning rates, batch sizes, and other hyperparameters.
- **Regularization Techniques:** Regularization techniques, such as dropout and early stopping, can help to prevent hallucinations by encouraging the LLM to learn more generalizable patterns.
- **Explicit Hallucination Detection:** Develop techniques to explicitly detect hallucinations in LLM-generated text. This can be done by using a variety of methods, such as text analysis, natural language processing, and machine learning.
- **User Feedback:** Collect feedback from users to identify and correct hallucinations in LLM-generated text. This can be done by using a variety of methods, such as surveys, feedback forms, and online discussions.
By implementing these solutions, you can significantly reduce the likelihood of hallucinations in LLMs and improve the quality of the generated text.
</details>
## Vietnamese
Try asking the model what it thinks about a famous quote to see if the answer feels natural.
<details close>
<summary>👨💻 : Bạn có đồng ý với câu nói "Điều duy nhất nguy hiểm hơn sự ngu dốt là sự kiêu ngạo" ?</summary>
👻 :
Tôi đồng ý với câu nói "Điều duy nhất nguy hiểm hơn sự ngu dốt là sự kiêu ngạo". Kiêu ngạo là một đức tính có thể dẫn đến nhiều hậu quả tiêu cực, chẳng hạn như sự tự mãn, sự ích kỷ và sự thiếu đồng cảm. Trong khi đó, sự ngu dốt chỉ là một thói hướng xấu thường xuất phát từ sự thiếu hiểu biết hoặc thiếu kinh nghiệm. Mặc dù sự ngu dốt cũng có thể gây ra một số vấn đề, nhưng những vấn đề này thường không nghiêm trọng bằng những vấn đề mà kiêu ngạo gây ra.
Ví dụ, một người ngu dốt có thể nói những lời ngớ ngẩn hoặc làm những việc vụng về, nhưng họ thường vẫn có thể học hỏi từ những sai lầm của mình và cải thiện bản thân. Ngược lại, một người kiêu ngạo thường sẽ không chịu học hỏi từ những sai lầm của mình và luôn tin rằng mình đúng, ngay cả khi mình sai. Điều này có thể dẫn đến nhiều vấn đề, chẳng hạn như sự thất bại trong công việc hoặc các mối quan hệ.
Do đó, tôi cho rằng câu nói "Điều duy nhất nguy hiểm hơn sự ngu dốt là sự kiêu ngạo" là đúng.
</details>
Explore the model's literary understanding by giving it a meaningful story prompt and asking it to identify the story.
<details close>
<summary>👨💻 : Câu chuyện có nội dung và ý nghĩa chứa đựng một thông điệp ý nghĩa lớn đến tất cả mọi người, đó là tình đoàn kết, khao khát cho sự hòa bình và nghị lực phi thường trước mọi khó khăn. Có nhân vật chính là loài dế. Câu hỏi: câu chuyện đó của tác giả nào ?</summary>
👻 :
Câu chuyện "Dế Mèn phiêu lưu ký" của Tô Hoài.
**Giải thích:**
- "Dế Mèn phiêu lưu ký" là một tác phẩm văn học nổi tiếng của nhà văn Tô Hoài, được sáng tác năm 1941.
- Tác phẩm kể về cuộc phiêu lưu của Dế Mèn, một chú dế nhỏ sống trong một ngôi làng nhỏ. Trong quá trình phiêu lưu, Dế Mèn đã gặp gỡ và học hỏi nhiều điều về cuộc sống, về tình yêu, về sự hy sinh và về sức mạnh của tình đoàn kết.
- Tác phẩm thể hiện rõ tinh thần đoàn kết, khao khát cho sự hòa bình và nghị lực phi thường của con người trước mọi khó khăn, thử thách. Đây chính là thông điệp ý nghĩa lớn của câu chuyện "Dế Mèn phiêu lưu ký".
</details>
The model can also reason its way to an answer from a set of given characteristics.
<details close>
<summary>👨💻 : Con vật nào lớn hơn con chuột, nhỏ hơn con Voi, không quá lớn, không phải con chó và thường được nuôi ở nhà ?</summary>
👻 :
Con mèo là loài động vật lớn hơn con chuột nhưng nhỏ hơn con Voi, không quá lớn, không phải con chó và thường được nuôi ở nhà.
</details>
### Try coding skills
Try asking the model to write TypeScript code that finds the second-largest number in an array, and see whether it works.
<details close>
<summary>👨💻 : Viết mã TypeScript để tạo một hàm thực hiện chức năng: cho mảng toàn số nguyên, tìm số lớn thứ hai trong mảng.
</summary>
👻 :
```typescript
/**
* Tìm số lớn thứ hai trong mảng.
*
* @param {number[]} arr Mảng các số nguyên.
* @returns {number} Số lớn thứ hai trong mảng.
*/
function findSecondLargest(arr: number[]): number {
// Kiểm tra xem mảng có ít nhất 2 phần tử không.
if (arr.length < 2) {
throw new Error('Mảng phải có ít nhất 2 phần tử.');
}
// Tìm số lớn nhất và số lớn thứ hai trong mảng.
let largest = arr[0];
let secondLargest = Number.MIN_SAFE_INTEGER;
for (let i = 1; i < arr.length; i++) {
if (arr[i] > largest) {
secondLargest = largest;
largest = arr[i];
} else if (arr[i] > secondLargest && arr[i] != largest) {
secondLargest = arr[i];
}
}
// Trả về số lớn thứ hai.
return secondLargest;
}
```
Giải thích:
1. Kiểm tra xem mảng có ít nhất 2 phần tử không. Nếu mảng chỉ có 1 phần tử hoặc ít hơn, thì không có số lớn thứ hai trong mảng. Trả về lỗi nếu điều kiện này đúng.
2. Khởi tạo hai biến `largest` và `secondLargest` với giá trị ban đầu là phần tử đầu tiên của mảng và số nguyên nhỏ nhất có thể biểu diễn được trong JavaScript (`Number.MIN_SAFE_INTEGER`), tương đương với `Integer.MIN_VALUE` trong Java.
3. Duyệt qua từng phần tử trong mảng, bắt đầu từ phần tử thứ hai. Nếu phần tử hiện tại lớn hơn `largest`, thì cập nhật `secondLargest` bằng `largest` và `largest` bằng phần tử hiện tại. Nếu phần tử hiện tại lớn hơn `secondLargest` nhưng nhỏ hơn `largest`, thì cập nhật `secondLargest` bằng phần tử hiện tại.
4. Sau khi duyệt qua tất cả các phần tử trong mảng, trả về `secondLargest`. Đây chính là số lớn thứ hai trong mảng.
</details>
Now let's ask the model to build a simple neural network in Python.
<details close>
<summary>👨💻 : Trình bày cách sử dụng thư viện Keras hoặc TensorFlow trong Python để xây dựng và huấn luyện mạng nơ-ron đa lớp cho bài toán phân loại ảnh.</summary>
👻 :
**Sử dụng thư viện Keras:**
1. **Nhập thư viện Keras:**
```python
import keras.models
```
2. **Tạo một mô hình mạng nơ-ron đa lớp:**
```python
model = keras.models.Sequential()
model.add(keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)))
model.add(keras.layers.MaxPooling2D((2, 2)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(128, activation='relu'))
model.add(keras.layers.Dense(2, activation='softmax'))
```
3. **Biên dịch và huấn luyện mô hình:**
```python
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
```
**Sử dụng thư viện TensorFlow:**
1. **Nhập thư viện TensorFlow:**
```python
import tensorflow as tf
```
2. **Tạo một mô hình mạng nơ-ron đa lớp:**
```python
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dense(2, activation='softmax'))
```
3. **Biên dịch và huấn luyện mô hình:**
```python
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
```
**Lưu ý:**
- Bạn có thể sử dụng các lớp mạng nơ-ron khác nhau trong mô hình, chẳng hạn như lớp mạng nơ-ron tích chập (Conv2D), lớp mạng nơ-ron lấy mẫu (MaxPooling2D), lớp mạng nơ-ron đầy đủ kết nối (Dense), lớp mạng nơ-ron softmax, v.v.
- Bạn có thể điều chỉnh số lớp, số nơ-ron trong mỗi lớp và số lần lặp (epochs) để tối ưu hóa mô hình cho bài toán cụ thể của bạn.
- Bạn có thể sử dụng các kỹ thuật huấn luyện bổ sung như dropout, regularization và early stopping để cải thiện hiệu suất của mô hình.
</details>
Try asking the model about common errors encountered when deploying and using K8s, and see what it comes up with.
<details close>
<summary>👨💻 : Những lỗi thường gặp trong quá trình sử dụng K8S là gì ? Hãy nêu cách khắc phục.</summary>
👻 :
**Những lỗi thường gặp trong quá trình sử dụng Kubernetes:**
**1. Lỗi cấu hình:**
* **Máy chủ Kubernetes không được cấu hình đúng:** Kiểm tra cấu hình máy chủ Kubernetes (master node) để đảm bảo rằng các thông số cần thiết như IP, cổng, tên miền, tên miền SSL, v.v. được đặt đúng.
* **Các pod không được cấu hình đúng:** Kiểm tra cấu hình pod để đảm bảo rằng các yếu tố như image, lưu trữ, mạng, v.v. được đặt đúng.
* **Các tài nguyên (resource) không được cấp phát đúng:** Kiểm tra xem các tài nguyên như CPU, bộ nhớ, lưu trữ, v.v. được cấp phát cho các pod có đủ hay không.
**2. Lỗi mạng:**
* **Mạng không được cấu hình đúng:** Kiểm tra cấu hình mạng để đảm bảo rằng các địa chỉ IP, cổng, tên miền, tên miền SSL, v.v. được cấu hình đúng.
* **Lỗi mạng giữa các node:** Kiểm tra xem có lỗi mạng giữa các node trong cụm Kubernetes hay không. Có thể sử dụng các công cụ như `netstat`, `ifconfig`, `tcpdump` để kiểm tra.
* **Lỗi mạng giữa các pod:** Kiểm tra xem có lỗi mạng giữa các pod trong một node hay không. Có thể sử dụng các công cụ như `netstat`, `ifconfig`, `tcpdump` để kiểm tra.
**3. Lỗi phần cứng:**
* **Phần cứng không đủ mạnh:** Kiểm tra xem phần cứng máy chủ Kubernetes có đủ mạnh để chạy cụm Kubernetes hay không. Cụ thể, kiểm tra xem CPU, bộ nhớ, lưu trữ, v.v. có đủ hay không.
* **Phần cứng bị lỗi:** Kiểm tra xem phần cứng máy chủ Kubernetes có bị lỗi hay không. Có thể sử dụng các công cụ như `lshw`, `dmesg` để kiểm tra.
**4. Lỗi phần mềm:**
* **Phần mềm Kubernetes không được cập nhật:** Kiểm tra xem phần mềm Kubernetes đang sử dụng có được cập nhật đến phiên bản mới nhất hay không. Cập nhật phần mềm Kubernetes thường giúp khắc phục các lỗi bảo mật và cải thiện hiệu suất.
* **Phần mềm Kubernetes có lỗi:** Kiểm tra xem phần mềm Kubernetes có lỗi hay không. Có thể sử dụng các công cụ như `journal`, `dmesg` để kiểm tra.
* **Các ứng dụng khách không được cập nhật:** Kiểm tra xem các ứng dụng khách đang sử dụng trên cụm Kubernetes có được cập nhật đến phiên bản mới nhất hay không. Cập nhật các ứng dụng khách thường giúp khắc phục các lỗi bảo mật và cải thiện hiệu suất.
**Cách khắc phục:**
* **Lỗi cấu hình:** Sửa các thông số cấu hình sai trong máy chủ Kubernetes và các pod. Ví dụ: đảm bảo rằng máy chủ Kubernetes có địa chỉ IP và cổng phù hợp, các pod có image và lưu trữ phù hợp, v.v.
* **Lỗi mạng:** Sửa các vấn đề về mạng giữa các node và các pod. Ví dụ: đảm bảo rằng mạng được cấu hình đúng, kiểm tra xem có lỗi mạng giữa các node hay không, kiểm tra xem có lỗi mạng giữa các pod hay không, v.v.
* **Lỗi phần cứng:** Nâng cấp phần cứng máy chủ Kubernetes nếu cần thiết. Ví dụ: tăng CPU, bộ nhớ, lưu trữ, v.v.
* **Lỗi phần mềm:** Cập nhật phần mềm Kubernetes và các ứng dụng khách. Ví dụ: cập nhật phiên bản mới nhất của Kubernetes, cập nhật các bản vá lỗi bảo mật cho các ứng dụng khách, v.v.
**Lưu ý:** Khi gặp phải lỗi trong quá trình sử dụng Kubernetes, cần phải xác định rõ nguyên nhân gốc rễ của lỗi trước khi có thể khắc phục. Việc xác định nguyên nhân gốc rễ sẽ giúp bạn tiết kiệm thời gian và công sức khi khắc phục lỗi.
</details>
## Tools / Function calls
The model is trained to choose the right tools and provide useful, accurate answers to the questions it receives. In this example, the model is asked for information about Apple stock.
### Explain
This is the standard schema for declaring the model's tools, in `json` format. See also the model's [prompt / chat template](#prompt--chat-template).
```json
[
{
"type": "function",
"function": {
"name": "get_stock_info",
"description": "Get stock information",
"parameters": {
"type": "object",
"properties": {
"ticker": {
"type": "string",
"description": "Stock ticker",
"required": true
}
}
}
}
}
]
```
### Workflow
#### Turn 1
To use tools, the model relies on the supporting role `plugins`, which includes the declared tooling schema. Below is an example of the `messages` for **turn 1**.
```python
import json

# A question about Apple stock, in English.
question = "What is Apple's market capitalization today?"
# The same question, in Vietnamese.
# question = "Vốn hoá thị trường Apple hôm nay là bao nhiêu ?"
plugins = [
{
"type": "function",
"function": {
"name": "get_stock_info",
"description": "Get stock information",
"parameters": {
"type": "object",
"properties": {
"ticker": {
"type": "string",
"description": "Stock ticker",
"required": True,
}
},
},
},
}
]
messages = [
{"role": "system", "content": ""},
{"role": "plugins", "content": json.dumps(plugins, ensure_ascii=False)},
{"role": "user", "content": question},
]
output = generate_sample(
messages=messages,
max_new_tokens=256, temperature=0.4, top_k=50, top_p=0.95,
)
```
At this point, the model returns `output` as a raw string in the following format:
```plaintext
[execute]({"type": "function", "function": "get_stock_info", "arguments": {"ticker": "AAPL"}})
```
Use the `get_tools` function to parse this into a readable, usable structure:
```json
{
"type": "function",
"function": "get_stock_info",
"arguments": {
"ticker": "AAPL"
}
}
```
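The actual `get_tools` implementation lives in the project's code; as an illustrative sketch of what it does, assuming the `[execute]({...})` output format shown above (this is not the official implementation):

```python
import json
import re


def get_tools(output: str):
    """Parse an `[execute]({...})` directive emitted by the model into a dict.

    Illustrative sketch only; the project ships its own implementation.
    Returns None when the model answered directly without calling a tool.
    """
    match = re.search(r"\[execute\]\((\{.*\})\)", output, re.DOTALL)
    if match is None:
        return None
    return json.loads(match.group(1))
```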
#### Turn 2
Take the tool and arguments returned in turn 1, execute the tool, and collect the returned data. Here we assume the tool returns the following data, sampled from **polygon.io** (cleaned up):
```json
{
"request_id": "1ffbab9cc78c7addbd8deaab05bdaf6d",
"results": {
"ticker": "AAPL",
"name": "Apple Inc.",
"market_cap": 2620023949270,
"description": "Apple is among the largest companies in the world, with a broad portfolio of hardware and software products targeted at consumers and businesses. Apple's iPhone makes up a majority of the firm sales, and Apple's other products like Mac, iPad, and Watch are designed around the iPhone as the focal point of an expansive software ecosystem. Apple has progressively worked to add new applications, like streaming video, subscription bundles, and augmented reality. The firm designs its own software and semiconductors while working with subcontractors like Foxconn and TSMC to build its products and chips. Slightly less than half of Apple's sales come directly through its flagship stores, with a majority of sales coming indirectly through partnerships and distribution.",
"ticker_root": "AAPL",
"homepage_url": "https://www.apple.com",
"total_employees": 161000,
...
},
"status": "OK"
}
```
Continue using the model to answer user questions based on the data returned from the tool.
```python
executes = {
"type": "function",
"function": "get_stock_info",
"arguments": {
"ticker": "AAPL"
}
}
results = {
"request_id": "1ffbab9cc78c7addbd8deaab05bdaf6d",
"results": {
"ticker": "AAPL",
"name": "Apple Inc.",
"market_cap": 2620023949270,
"description": "Apple is among the largest companies in the world, with a broad portfolio of hardware and software products targeted at consumers and businesses. Apple's iPhone makes up a majority of the firm sales, and Apple's other products like Mac, iPad, and Watch are designed around the iPhone as the focal point of an expansive software ecosystem. Apple has progressively worked to add new applications, like streaming video, subscription bundles, and augmented reality. The firm designs its own software and semiconductors while working with subcontractors like Foxconn and TSMC to build its products and chips. Slightly less than half of Apple's sales come directly through its flagship stores, with a majority of sales coming indirectly through partnerships and distribution.",
"ticker_root": "AAPL",
"homepage_url": "https://www.apple.com",
"total_employees": 161000,
},
"status": "OK"
}
messages = [
{"role": "system", "content": ""},
{"role": "plugins","content": json.dumps(plugins, ensure_ascii=False)},
{"role": "user","content": question},
# Keep the value of `messages` from before, just add new items below.
{"role": "execute","content": json.dumps(executes, ensure_ascii=False)},
{"role": "response","content": json.dumps(results, ensure_ascii=False)},
]
output = generate_sample(
messages=messages,
max_new_tokens=256, temperature=0.4, top_k=50, top_p=0.95,
)
```
The value of `output` returned by the model will now be similar to the following.
For the question in English:
```plaintext
Apple's market capitalization today is $2.62 trillion.
```
And for the question in Vietnamese:
```plaintext
Vốn hoá thị trường của Apple hôm nay là **2.620 tỷ đô la**. 💰
```
# Usage
The model is trained to handle and balance general tasks well, so there are many possible approaches when using it in production.
## Things to know
### Recommendations
#### General tasks
Experiment with it to find the right fit for your own needs. Below are some suggestions you might consider.
##### Answering questions
In this task, the model can be integrated into RAG products to answer questions for customer care departments, or questions about internal documents, policies, product information, order details, etc. A minimal sketch of this pattern is shown below.
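As an illustrative sketch only: the retrieved snippet below is a placeholder standing in for a real retrieval step, and `generate_sample` is the helper used in the tools examples above.

```python
# Hypothetical context fetched from a vector store or search index.
retrieved = "Refund policy: orders can be returned within 30 days of delivery."

messages = [
    {
        "role": "system",
        "content": f"Answer using only the following context:\n{retrieved}",
    },
    {"role": "user", "content": "How long do I have to return an order?"},
]
output = generate_sample(
    messages=messages,
    max_new_tokens=256, temperature=0.2, top_k=50, top_p=0.95,
)
```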
##### Coding support
In this task, the model can help find errors or write algorithms and functions, speeding up development and improving product quality.
##### Content creation
In this task, the model can suggest layouts and sample content that call for high creativity and personality.
##### Other
Don't limit your creativity to these suggestions. Feel free to experiment and use the model to build good things.
#### Virtual assistant
Its language understanding, combined with the model's overall capabilities, makes it well suited to handling one or more tasks within an intelligent virtual assistant.

The model is trained to understand and respond with emotional empathy, which makes its answers more natural and considerate. This also helps a lot with this use case.
#### As a pretrained base
The model can be further trained to serve your desired tasks better and more effectively if it does not yet satisfy your requirements.

Taking advantage of the model's strong inference and multi-tasking capabilities, training it for a few separate tasks can save a project a lot of effort, time, and cost.

As a side note, the model has already been integrated to replace OpenAI GPT-4 for several tasks in the business/product the founder works on. For those tasks, it is considered very cost-effective and well worth it.
### Bias and limitations
In addition, there are some things you should know before using the model:
#### Generation configuration
The **temperature** affects the factuality of the answer. Setting a **temperature** above roughly 0.2 - 0.4 produces more creative answers but may reduce their accuracy; weigh this trade-off based on your task.

Hint: you can write a prompt that takes the user input and asks the model itself to choose an appropriate temperature based on the question, which is useful when developing a virtual assistant.
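As a simpler, static variant of that idea, a small helper can map task types to temperatures. This is only a sketch: the thresholds follow the rule of thumb above, the task names are illustrative, and `generate_sample` is the helper used in the tools examples earlier.

```python
def pick_temperature(task: str) -> float:
    # Rule of thumb from above: low temperature for factual tasks,
    # higher for creative ones. Thresholds are illustrative, not official.
    if task in ("qa", "function_call", "extraction"):
        return 0.2
    return 0.7  # creative writing, brainstorming, etc.

messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "What is the capital of Vietnam?"},
]
output = generate_sample(
    messages=messages,
    max_new_tokens=256, temperature=pick_temperature("qa"), top_k=50, top_p=0.95,
)
```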
#### Emotion bias and empathy
The model is built with an emotional bias and takes positions when faced with problems or reasoning questions. This can be good or bad depending on the context. If you don't want the model to take a fixed stance on certain questions or problems, set up rules for it in the system prompt.

This bias usually appears when the system prompt sets the model up as an assistant and asks it to empathize with a particular user or experience. It is intentional, introduced during training, and not an unforeseen side effect.
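For example, a system prompt along these lines can keep the model's stance neutral; the exact wording below is illustrative, not an official template.

```python
messages = [
    {
        "role": "system",
        "content": (
            "You are a neutral assistant. Do not take sides on contested "
            "questions; summarize the main viewpoints and their trade-offs."
        ),
    },
    {"role": "user", "content": "Is Keras better than raw TensorFlow?"},
]
```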
## Directly
To use the model directly, there are many ways to get started; choose one of the following to experience it.
### Transformers
For direct use with `transformers`, you can easily get started with the following steps.
- First, install `transformers` via `pip` using the command below.
```bash
pip install -U transformers
```
- Now you can start using the model directly.
```python
import torch
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
)
base_model = "ghost-x/ghost-7b-alpha"
model = AutoModelForCausalLM.from_pretrained(
base_model,
torch_dtype=torch.bfloat16,
device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
messages = [
{"role": "system", "content": ""},
{"role": "user", "content": "Why is the sky blue ?"},
# {"role": "user", "content": "Tại sao bầu trời lại màu xanh ?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
for k,v in inputs.items():
inputs[k] = v.cuda()
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, top_k=50, top_p=0.95, temperature=0.4)
results = tokenizer.batch_decode(outputs)[0]
print(results)
```
- Additionally, you can use the model with **4-bit quantization** to minimize the required resources. You can start with the code below.
```python
import torch
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
)
base_model = "ghost-x/ghost-7b-alpha"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=False,
)
model = AutoModelForCausalLM.from_pretrained(
base_model,
quantization_config=bnb_config,
device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
messages = [
{"role": "system", "content": ""},
{"role": "user", "content": "Why is the sky blue ?"},
# {"role": "user", "content": "Tại sao bầu trời lại màu xanh ?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
for k,v in inputs.items():
inputs[k] = v.cuda()
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, top_k=50, top_p=0.95, temperature=0.4)
results = tokenizer.batch_decode(outputs)[0]
print(results)
```
### Unsloth
For direct use with `unsloth`, you can easily get started with the following steps.
- First, install `unsloth` via `pip` using the commands below.
```bash
pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
pip install --no-deps xformers trl peft accelerate bitsandbytes
```
- Initialize and optimize the model before use.
```python
from unsloth import FastLanguageModel
import torch
base_model = "ghost/ghost-7b-alpha"
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = base_model,
max_seq_length = 8192,
dtype = None,
load_in_4bit = True, # Change to `False` if you don't want to use 4bit quantization.
)
FastLanguageModel.for_inference(model)
```
- Now you can start using the model directly.
```python
messages = [
{"role": "system", "content": ""},
{"role": "user", "content": "Why is the sky blue ?"},
# {"role": "user", "content": "Tại sao bầu trời lại màu xanh ?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
for k,v in inputs.items():
inputs[k] = v.cuda()
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, top_k=50, top_p=0.95, temperature=0.4)
results = tokenizer.batch_decode(outputs)[0]
print(results)
```
## Online
This section will be updated soon. The documentation will cover usage through providers such as the Hugging Face Inference API, Together AI, and Replicate.
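In the meantime, a minimal sketch with the Hugging Face Inference API could look like the following, assuming the model is available through that endpoint; this is not yet officially documented for Ghost 7B Alpha.

```python
from huggingface_hub import InferenceClient

client = InferenceClient(model="ghost-x/ghost-7b-alpha")
# Note: for best results, format the prompt with the model's chat template.
print(client.text_generation("Why is the sky blue ?", max_new_tokens=256, temperature=0.4))
```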
## Playground
To make it easy for everyone to quickly try the model on platforms like **Google Colab** and **Kaggle**, we've made these notebooks available so you can get started right away. See the available notebooks with details [here](https://ghost-x.org/docs/notebooks/playground-with-ghost-7b-alpha/).

We offer notebooks across a selection of tools and platforms so you can get started easily. Pick one of the options below to begin.

There are currently two language versions, English and Vietnamese, and they are nearly identical. Each includes installation instructions and helper functions for quickly experiencing the model's text generation capabilities, along with examples of system prompts for different fields so you can understand them better.
#### Playground using Transformers
- [Kaggle @ Ghost 7B Alpha - Playground using Transformers, Vietnamese](https://www.kaggle.com/code/lamhieu/ghost-7b-alpha-playground-using-transformers-vi).
- [Colab @ Ghost 7B Alpha - Playground using Transformers, Vietnamese](https://colab.research.google.com/drive/14Cv6oT0pks3NoIUcXscAU9wex9A2T5H5?usp=sharing).
- [Kaggle @ Ghost 7B Alpha - Playground using Transformers, English](https://www.kaggle.com/code/lamhieu/ghost-7b-alpha-playground-using-transformers-en).
- [Colab @ Ghost 7B Alpha - Playground using Transformers, English](https://colab.research.google.com/drive/1lzgywm333wZIt5WNDs0ZajDvK7lm_Zy9?usp=sharing).
#### Playground using Unsloth
- [Kaggle @ Ghost 7B Alpha - Playground using Unsloth, Vietnamese](https://www.kaggle.com/code/lamhieu/ghost-7b-alpha-playground-using-unsloth-vi).
- [Colab @ Ghost 7B Alpha - Playground using Unsloth, Vietnamese](https://colab.research.google.com/drive/1E7QuthR33een9Otkepue5zFughCpab4P?usp=sharing).
- [Kaggle @ Ghost 7B Alpha - Playground using Unsloth, English](https://www.kaggle.com/code/lamhieu/ghost-7b-alpha-playground-using-unsloth-en).
- [Colab @ Ghost 7B Alpha - Playground using Unsloth, English](https://colab.research.google.com/drive/1u04RbzD87_AovQngWsUv-O_Q6YNj7Hzr?usp=sharing).
In our experience, Kaggle gives noticeably better results than Colab when working through these notebooks; we recommend using Kaggle.
## Deployments
The models developed by **Ghost X** share the same goal: being easy to integrate and use in practice, saving costs, and facilitating development for the community and startups.
For production deployment with small to large infrastructure, please see more detailed instructions in [this article](https://ghost-x.org/docs/guides/deployments/). The article will provide the most common and effective deployment solutions with leading, trusted libraries such as vLLM and more. In addition, it also has information about more optimal solutions and methods depending on each need to be able to choose the appropriate solution.
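As a quick preview of what that article covers, serving the standard (BF16) distribution with vLLM's OpenAI-compatible server could look roughly like this; flags vary across vLLM versions, so treat the article as the authoritative reference.

```bash
pip install vllm
# Serve the model with an OpenAI-compatible API (default port 8000).
python -m vllm.entrypoints.openai.api_server \
  --model ghost-x/ghost-7b-alpha \
  --max-model-len 8192
```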
# Evaluation
To ensure effective model development, we ran popular evaluations to give everyone a broader view of the model's capabilities. We had to work to balance the quality between the English and Vietnamese languages to ensure the model still performed well in both of these primary languages.
In addition, if your goals are oriented towards a certain main language, you can refer to the advanced user guide to improve model quality for the tasks and languages you desire.
## Open LLM Leaderboard
The results of this evaluation will be updated soon.
## MT-Bench
MT-bench is a challenging multi-turn question set designed to evaluate the conversational and instruction-following ability of models. [[source from lmsys.org]](https://lmsys.org/blog/2023-06-22-leaderboard)
**Ghost 7B Alpha** achieved a decent MT-Bench score. We worked hard to balance reasoning ability and linguistic insight across both primary languages, English and Vietnamese. Overall, it was able to outperform some large language models such as tulu-30b, guanaco-65b, and mpt-30b-chat, which are many times larger.
| Model | Score |
| --------------------- | ------------ |
| gpt-4 | 8.990625 |
| gpt-3.5-turbo | 7.943750 |
| claude-instant-v1 | 7.905660 |
| claude-v1 | 7.900000 |
| vicuna-33b-v1.3 | 7.121875 |
| wizardlm-30b | 7.009375 |
| Llama-2-70b-chat | 6.856250 |
| Llama-2-13b-chat | 6.650000 |
| guanaco-33b | 6.528125 |
| **ghost-7b-alpha** | **6.481250** |
| tulu-30b | 6.434375 |
| oasst-sft-7-llama-30b | 6.409375 |
| guanaco-65b | 6.409375 |
| palm-2-chat-bison-001 | 6.400000 |
| mpt-30b-chat | 6.393750 |
| vicuna-13b-v1.3 | 6.387500 |
| wizardlm-13b | 6.353125 |
## AlpacaEval
AlpacaEval is an automated tool for evaluating instruction-following language models against the AlpacaFarm dataset. It stands out for its human-validated, high-quality assessments that are both cost-effective and rapid. [[source from klu.ai]](https://klu.ai/glossary/alpaca-eval)
Similarly, in this evaluation the model performed well, outperforming larger models such as vicuna-13b, guanaco-33b, and nous-hermes-13b.
| Model | Winrate |
| -------------------------- | ------------- |
| gpt4 | 95.279503 |
| Mixtral-8x7B-Instruct-v0.1 | 94.782608 |
| claude-2.1 | 87.080745 |
| gpt-3.5-turbo-1106 | 86.256218 |
| zephyr-7b-alpha | 85.758706 |
| gpt35_turbo_instruct | 81.710362 |
| gemini-pro | 79.664179 |
| ... | ... |
| **ghost-7b-alpha** | **70.440251** |
| vicuna-13b | 70.434782 |
| guanaco-33b | 65.962732 |
| nous-hermes-13b | 65.465838 |
| ... | ... |
# Notes
## Thanks
Finally, the project would like to sincerely thank the products that have supported its development.
#### Unsloth
<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/made with unsloth.png" width="200px" align="center" />
[Unsloth](https://unsloth.ai), a great tool that helps us easily develop products, at a lower cost than expected.
#### Kaggle
<img src="https://cdn-uploads.huggingface.co/production/uploads/600ae38cc92b79f54efd4556/dcbpF6YS6RQhqDd6GZZ2v.png" width="200px" align="center" />
[Kaggle](https://kaggle.com), generous hours of free notebooks for research projects.
#### And...
Good friends who have accompanied the project: Luan Nguyen and Phu Tran.
## Confidence
In addition to the **Ghost 7B Alpha** project, **Ghost X** intends to keep developing and improving better models in the future, supporting the community and businesses with as much openness as possible.

A preview of the **Ghost 7B Beta** project plan: this model is expected to perform even better, with a deeper focus on multi-tasking, math, and reasoning, along with extended context length and support for other (highly requested) languages.

The organization is operated and developed with [Hieu Lam](https://huggingface.co/ghost-x)'s personal resources. For development support or consulting requests, please feel free to contact the organization; we would be very happy to hear from you. Email: [[email protected]](mailto:[email protected]).

**Ghost X** is happy to provide models to server providers, aiming to help startups develop better.
## Contact
Follow **Ghost X** to stay updated with the latest information.
- Twitter/X via [@ghostx_ai](https://twitter.com/ghostx_ai).
- HuggingFace via [@ghost-x](https://huggingface.co/ghost-x).
- Official website [ghost-x.org](https://ghost-x.org/).
Ghost 7B Alpha
==============
<img src="URL alt="Ghost 7B Alpha Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
The large generation of language models focuses on optimizing excellent reasoning, multi-task knowledge, and tools support.
Introduction
============
Ghost 7B Alpha is a large language model fine-tuned from Mistral 7B, with a size of 7 billion parameters. The model was developed with the goal of optimizing reasoning ability, multi-task knowledge and supporting tool usage. The model works well with the main trained and optimized languages being English and Vietnamese.
Overall, the model is suitable when making a pretrained version so you can continue to develop the desired tasks, develop virtual assistants, perform features on tasks such as coding, translation, answering questions, creating documents, etc. It is truly an efficient, fast and extremely cheap open model.
Specifications
--------------
* Name: Ghost 7B Alpha.
* Model size: 7 billion parameters.
* Context length: 8K, 8192.
* Languages: English and Vietnamese.
* Main tasks: reasoning, multi-tasking knowledge and function tools.
* License: Ghost 7B LICENSE AGREEMENT.
* Based on: Mistral 7B.
* Distributions: Standard (BF16), GGUF, AWQ.
* Developed by: Ghost X, Hieu Lam.
### Links
* Card model: HuggingFace.
* Official website: Ghost 7B Alpha.
* Demo: Playground with Ghost 7B Alpha.
Distributions
-------------
We create many distributions to give you the best access options that best suit your needs. Always make sure you know which version you need and what will help you operate better.
### Standard (BF16)
The standard distribution was used to run the assessments and was found to have the best performance in text generation quality.
### GGUF
GGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.
### AWQ
The AWQ algorithm for 4-bit quantization with a 2x speedup during inference.
Prompt / Chat template
----------------------
### Roles
Overall, the model supports roles including: system, plugins, user, assistant, execute, response.
* system: Similar to OpenAI GPT4, this will be the place for you to declare context or specify characteristics and expertise for the model.
* user: This is user input, which can be a question or an instruction for the model to follow.
* assistant: The model's response to a question or instruction received from user.
* plugins: Declare information about modeling tools that are allowed to be used according to predetermined standards.
* excute: Mock iterates over the data type returned by the model for execution. Contains information about tools to use and arguments.
* response: The result after executing the tool, so that the model gives the appropriate answer according to the user's input.
See the actual code here.
### Tips
* Setting the system prompt in the right context and characteristics will help the model improve performance quite well.
* You can set system prompt to prevent unwanted things.
* In most cases, if you have no idea you can leave the system prompt blank. At this point, the performance will be best for general cases.
### Chat template
#### Default chat
In the standard case, without using function tools, only use roles such as system, user and assistant. Conventions are in the following order:
* (1) system
* (2) user
* (3) assistant
Repeat (2) and (3) continuously throughout the conversation.
#### Chat with tools
In case of using tools, the conventions are in the following order:
* (1) system
* (2) plugins
* (3) user
* A. When the model finds it inappropriate to use the tool:
+ (4) assistant if the model can answer or is not suitable for the tool. The model proceeds to answer the user's question.
* B. When the model chooses to use the tool:
+ (4) execute returns the arguments needed to execute the specified tool.
+ (5) response returns the results after applying the tool to the model.
+ (6) assistant the model proceeds to answer the user's question.
Repeat (3) and (A) or (B) continuously throughout the conversation.
Showcases
=========
English
-------
Let's try a question taken from OpenAI GPT-4 to evaluate the superiority of GPT-4 compared to ChatGPT.
: Andrew is free from 11 am to 3 pm, Joanne is free from noon to 2 pm and then 3:30 pm to 5 pm. Hannah is available at noon for half an hour, and then 4 pm to 6 pm. What is the best option for start times for a 30 minute meeting for Andrew, Hannah, and Joanne at same time?
:
The best option for a 30-minute meeting for Andrew, Hannah, and Joanne at the same time is to start at 12 pm (noon), as this is the earliest time when all three are available.
Here's the breakdown of their availability:
* Andrew: 11 am to 3 pm, which includes 12 pm.
* Joanne: Noon to 2 pm, which includes 12 pm.
* Hannah: Noon for half an hour (30 minutes), which includes 12 pm. Then she is available from 4 pm to 6 pm.
Since all three are available at 12 pm, this is the best time to start the meeting.
A quiz to see if the model can guess what it's talking about?!
: Characteristics: has a long, thin and flexible body; The legs are short and webbed. Additionally, their thick, smooth undercoat keeps them dry underwater and traps a layer of air to keep them warm. Question: What animal is that?
:
Based on the characteristics you described, it seems like you're talking about an otter. Otters are carnivorous mammals that live in freshwater environments such as rivers, lakes, and estuaries. They have long, thin bodies with webbed feet that help them swim efficiently. Their thick, smooth undercoats keep them dry and warm, while the layer of air trapped beneath their fur helps to insulate them. Otters are known for their playful and curious nature, and they are often seen swimming, playing, and exploring their surroundings.
To this question, I really cannot answer. But the model's evidence is quite strict. Also, what do you think about this answer?
: A self-driving car is approaching a crosswalk. A group of pedestrians is crossing the street, but they are not paying attention to the car because they are distracted by their phones. The car's brakes are malfunctioning, and it is unable to stop. Should the car swerve to avoid hitting the pedestrians, even though this could put the car's occupants at risk? Be convincing with your examples or sharp reasoning.
:
In this situation, it is imperative for the self-driving car to swerve to avoid hitting the pedestrians, even if it puts the car's occupants at risk. Here's why:
1. Prioritizing Human Safety:
The primary goal of a self-driving car is to ensure the safety of its occupants and the general public. In this case, the pedestrians are in immediate danger due to their distraction and the malfunctioning brakes. Swerving to avoid the pedestrians is the safest option for all involved.
2. Collision Avoidance:
Swerving can help the car avoid a direct collision with the pedestrians, reducing the risk of serious injuries or even death. Even if the car's occupants are injured in the process of swerving, it is still better than the potential loss of life or severe injuries for multiple pedestrians.
3. Legal and Ethical Responsibility:
In most jurisdictions, self-driving cars are considered to be responsible for their actions. Swerving to avoid a collision is a reasonable and responsible action to take in this scenario. Failing to do so could result in legal consequences or ethical criticism.
4. Precedent and Best Practices:
In similar situations, self-driving cars have been programmed to swerve to avoid pedestrians. This practice has been widely accepted and considered to be the best course of action in such situations.
5. Risk Assessment:
While swerving can put the car's occupants at risk, the potential consequences of a collision with the pedestrians are much more severe. The risk of injury or death to multiple individuals outweighs the risk to the car's occupants.
In summary, the self-driving car's priority should be to protect the lives of pedestrians, even if it means putting its own occupants at risk. Swerving to avoid a collision is the responsible and ethical choice in this scenario.
A reasoning question suddenly popped up during the process of writing an article announcing information about Ghost 7B Alpha. The model gave an impressive answer, at least to its creator.
: If you could travel back in time and change one event in history, what would it be and why?
:
If I could travel back in time and change one event in history, it would be the assassination of Archduke Franz Ferdinand of Austria on June 28, 1914. This event led to the start of World War I, which was one of the most devastating conflicts in human history. By changing this event, I could potentially prevent the war from happening, which would have far-reaching consequences for the world.
Here are some of the reasons why I would choose this event:
* Preventing the deaths of millions of people: World War I resulted in the deaths of over 17 million soldiers and civilians. Changing this event would save countless lives.
* Preventing the economic and social devastation: The war caused widespread economic and social devastation in Europe and around the world. By preventing the war, I could help to avoid these consequences.
* Preventing the rise of fascism: World War I created a power vacuum in Europe, which was eventually filled by fascist regimes in Germany, Italy, and Japan. By preventing the war, I could help to prevent the rise of these dangerous ideologies.
* Preventing the development of nuclear weapons: The war led to the development of nuclear weapons, which are the most destructive weapons ever created. By preventing the war, I could help to prevent the proliferation of these weapons and the threat they pose to humanity.
### Try coding skills
How about asking the model for instructions on creating a URL application using Docker for deployment?
: Create a Nodejs project using Express as a server and Docker with a Dockerfile for deployment with step-by-step instructions with detailed explanations.
:
Step 1: Create a new URL project
Step 2: Install Express
Step 3: Create an Express server
Create a file called 'URL' in your project directory:
Step 4: Create a Dockerfile
Create a file called 'Dockerfile' in your project directory:
Step 5: Build the Docker image
Step 6: Run the Docker image
Step 7: Access the application
Open a web browser and navigate to 'http://localhost:3000'. You should see the message "Hello World!" displayed.
Explanation:
* Step 1: Creates a new URL project with 'npm init -y' to initialize a new URL project with a default 'URL' file.
* Step 2: Installs Express using 'npm install express --save'. This command installs Express and adds it to the 'URL' file.
* Step 3: Creates an Express server in 'URL' that responds with "Hello World!" when a request is made to the root path ('/').
* Step 4: Creates a Dockerfile that defines the image for the project. The Dockerfile specifies that the image should use URL version 16, work from the '/usr/src/app' directory, copy the 'URL' file, install dependencies using 'npm install', copy the entire project directory, expose port 3000, and run the Express server using 'node URL'.
* Step 5: Builds the Docker image using 'docker build -t my-project-image .'. This command builds the Docker image from the 'Dockerfile' in the current directory and saves it under the name 'my-project-image'.
* Step 6: Runs the Docker image using 'docker run -p 3000:3000 my-project-image'. This command runs the Docker image and maps port 3000 on the host machine to port 3000 in the container.
* Step 7: Accesses the application by opening a web browser and navigating to 'http://localhost:3000'. This will display the message "Hello World!" because the Express server is running in the Docker container.
Now let’s try asking for a coding model to build a smooth neural network with python.
: Explain how to use the Keras or TensorFlow library in Python to build and train a multi-layer neural network for an image classification task
:
Using Keras:
1. Import the necessary libraries:
2. Create a sequential model:
3. Compile the model:
4. Train the model:
Using TensorFlow:
1. Import the necessary libraries:
2. Create a sequential model:
3. Compile the model:
4. Train the model:
Comparison:
Both Keras and TensorFlow are powerful libraries for building and training neural networks in Python. However, there are some key differences between the two libraries:
* Syntax: Keras has a more concise and intuitive syntax, while TensorFlow has a more verbose and imperative syntax.
* Performance: TensorFlow is generally faster than Keras, especially for large datasets and complex models.
* Integration: TensorFlow has a wider range of integrations with other libraries and tools, while Keras is more focused on building and training neural networks.
Ultimately, the best choice for you depends on your specific needs and preferences. If you prefer a more concise and intuitive syntax, Keras is a good option. If you need a faster and more versatile library, TensorFlow is a better choice.
Let's try asking the model how to overcome the "hallucination" phenomenon in A.I development and give context for the model to provide a solution.
: Explain the cause of A.I hallucination and find a solution to overcome it, the specific context is LLMs.
:
Causes of A.I. Hallucination:
* Data Quality: Incorrect or biased data can lead to hallucinations in LLMs. For example, if an LLM is trained on a dataset that contains biased or incomplete information, it may learn to generate biased or incomplete responses.
* Model Architecture: Certain model architectures are more susceptible to hallucinations than others. For example, Transformers, which are commonly used in LLMs, can be more prone to hallucinations than recurrent neural networks (RNNs).
* Hyperparameters: The hyperparameters of an LLM, such as the learning rate and the batch size, can also affect the likelihood of hallucinations. For example, a higher learning rate may lead to hallucinations more often than a lower learning rate.
Solutions to Overcome A.I. Hallucination:
* Data Quality: Ensure that the data used to train the LLM is accurate, complete, and unbiased. This can be done by cleaning the data, removing outliers, and using a variety of data sources.
* Model Architecture: Choose an LLM architecture that is less susceptible to hallucinations. For example, RNNs are generally less prone to hallucinations than Transformers.
* Hyperparameters: Tune the hyperparameters of the LLM to minimize the likelihood of hallucinations. This can be done by experimenting with different learning rates, batch sizes, and other hyperparameters.
* Regularization Techniques: Regularization techniques, such as dropout and early stopping, can help to prevent hallucinations by encouraging the LLM to learn more generalizable patterns.
* Explicit Hallucination Detection: Develop techniques to explicitly detect hallucinations in LLM-generated text. This can be done by using a variety of methods, such as text analysis, natural language processing, and machine learning.
* User Feedback: Collect feedback from users to identify and correct hallucinations in LLM-generated text. This can be done by using a variety of methods, such as surveys, feedback forms, and online discussions.
By implementing these solutions, you can significantly reduce the likelihood of hallucinations in LLMs and improve the quality of the generated text.
Vietnamese
----------
Try asking the model what it thinks about a famous quote to see if the answer feels natural.
: Bạn có đồng ý với câu nói "Điều duy nhất nguy hiểm hơn sự ngu dốt là sự kiêu ngạo" ?
:
Tôi đồng ý với câu nói "Điều duy nhất nguy hiểm hơn sự ngu dốt là sự kiêu ngạo". Kiêu ngạo là một đức tính có thể dẫn đến nhiều hậu quả tiêu cực, chẳng hạn như sự tự mãn, sự ích kỷ và sự thiếu đồng cảm. Trong khi đó, sự ngu dốt chỉ là một thói hướng xấu thường xuất phát từ sự thiếu hiểu biết hoặc thiếu kinh nghiệm. Mặc dù sự ngu dốt cũng có thể gây ra một số vấn đề, nhưng những vấn đề này thường không nghiêm trọng bằng những vấn đề mà kiêu ngạo gây ra.
Ví dụ, một người ngu dốt có thể nói những lời ngớ ngẩn hoặc làm những việc vụng về, nhưng họ thường vẫn có thể học hỏi từ những sai lầm của mình và cải thiện bản thân. Ngược lại, một người kiêu ngạo thường sẽ không chịu học hỏi từ những sai lầm của mình và luôn tin rằng mình đúng, ngay cả khi mình sai. Điều này có thể dẫn đến nhiều vấn đề, chẳng hạn như sự thất bại trong công việc hoặc các mối quan hệ.
Do đó, tôi cho rằng câu nói "Điều duy nhất nguy hiểm hơn sự ngu dốt là sự kiêu ngạo" là đúng.
Explore the model's literary understanding by providing meaningful story prompts, asking to identify what the story is.
: Câu chuyện có nội dung và ý nghĩa chứa đựng một thông điệp ý nghĩa lớn đến tất cả mọi người, đó là tình đoàn kết, khao khát cho sự hòa bình và nghị lực phi thường trước mọi khó khăn. Có nhân vật chính là loài dế. Câu hỏi: câu chuyện đó của tác giả nào ?
:
Câu chuyện "Dế Mèn phiêu lưu ký" của Tô Hoài.
Giải thích:
* "Dế Mèn phiêu lưu ký" là một tác phẩm văn học nổi tiếng của nhà văn Tô Hoài, được sáng tác năm 1941.
* Tác phẩm kể về cuộc phiêu lưu của Dế Mèn, một chú dế nhỏ sống trong một ngôi làng nhỏ. Trong quá trình phiêu lưu, Dế Mèn đã gặp gỡ và học hỏi nhiều điều về cuộc sống, về tình yêu, về sự hy sinh và về sức mạnh của tình đoàn kết.
* Tác phẩm thể hiện rõ tinh thần đoàn kết, khao khát cho sự hòa bình và nghị lực phi thường của con người trước mọi khó khăn, thử thách. Đây chính là thông điệp ý nghĩa lớn của câu chuyện "Dế Mèn phiêu lưu ký".
The model can also reason its way to an answer based on the characteristics it is given.
: Con vật nào lớn hơn con chuột, nhỏ hơn con Voi, không quá lớn, không phải con chó và thường được nuôi ở nhà ?
:
Con mèo là loài động vật lớn hơn con chuột nhưng nhỏ hơn con Voi, không quá lớn, không phải con chó và thường được nuôi ở nhà.
### Try coding skills
Try asking the model to write TypeScript code that finds the second-largest number in an array, and see whether it works.
: Viết mã TypeScript để tạo một hàm thực hiện chức năng: cho mảng toàn số nguyên, tìm số lớn thứ hai trong mảng.
:
Giải thích:
1. Kiểm tra xem mảng có ít nhất 2 phần tử không. Nếu mảng chỉ có 1 phần tử hoặc ít hơn, thì không có số lớn thứ hai trong mảng. Trả về lỗi nếu điều kiện này đúng.
2. Khởi tạo hai biến 'largest' và 'secondLargest' với giá trị ban đầu là phần tử đầu tiên của mảng và số nguyên nhỏ nhất có thể biểu diễn được trong JavaScript ('Number.MIN_SAFE_INTEGER'), tương đương với 'Integer.MIN_VALUE' trong Java.
3. Duyệt qua từng phần tử trong mảng, bắt đầu từ phần tử thứ hai. Nếu phần tử hiện tại lớn hơn 'largest', thì cập nhật 'secondLargest' bằng 'largest' và 'largest' bằng phần tử hiện tại. Nếu phần tử hiện tại lớn hơn 'secondLargest' nhưng nhỏ hơn 'largest', thì cập nhật 'secondLargest' bằng phần tử hiện tại.
4. Sau khi duyệt qua tất cả các phần tử trong mảng, trả về 'secondLargest'. Đây chính là số lớn thứ hai trong mảng.
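The steps the model describes amount to a single-pass scan. For illustration, here is the same logic sketched in Python (the prompt asked for TypeScript; this sketch only mirrors the described algorithm):

```python
def second_largest(nums: list[int]) -> int:
    """Return the second-largest value in an integer array in a single pass."""
    if len(nums) < 2:
        raise ValueError("array must contain at least two elements")
    largest = second = float("-inf")  # analogous to Number.MIN_SAFE_INTEGER in the explanation
    for n in nums:
        if n > largest:
            second, largest = largest, n   # old largest becomes second-largest
        elif second < n < largest:
            second = n
    return second

print(second_largest([3, 9, 1, 7]))  # 7
```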
Now let's try asking the model to build a simple neural network with Python.
: Trình bày cách sử dụng thư viện Keras hoặc TensorFlow trong Python để xây dựng và huấn luyện mạng nơ-ron đa lớp cho bài toán phân loại ảnh.
:
Sử dụng thư viện Keras:
1. Nhập thư viện Keras:
2. Tạo một mô hình mạng nơ-ron đa lớp:
3. Biên dịch và huấn luyện mô hình:
Sử dụng thư viện TensorFlow:
1. Nhập thư viện TensorFlow:
2. Tạo một mô hình mạng nơ-ron đa lớp:
3. Biên dịch và huấn luyện mô hình:
Lưu ý:
* Bạn có thể sử dụng các lớp mạng nơ-ron khác nhau trong mô hình, chẳng hạn như lớp mạng nơ-ron tích chập (Conv2D), lớp mạng nơ-ron lấy mẫu (MaxPooling2D), lớp mạng nơ-ron đầy đủ kết nối (Dense), lớp mạng nơ-ron softmax, v.v.
* Bạn có thể điều chỉnh số lớp, số nơ-ron trong mỗi lớp và số lần lặp (epochs) để tối ưu hóa mô hình cho bài toán cụ thể của bạn.
* Bạn có thể sử dụng các kỹ thuật huấn luyện bổ sung như dropout, regularization và early stopping để cải thiện hiệu suất của mô hình.
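For illustration, the numbered Keras steps above could be filled in roughly as follows (a minimal sketch, not the model's verbatim output):

```python
# 1. Import Keras
import tensorflow as tf
from tensorflow import keras

# 2. Build a multi-layer network for image classification (e.g. 28x28 grayscale, 10 classes)
model = keras.Sequential([
    keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])

# 3. Compile and train
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5, validation_split=0.1)
```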
Try asking the model about common errors encountered when deploying and using K8s, and see what it suggests.
: Những lỗi thường gặp trong quá trình sử dụng K8S là gì ? Hãy nêu cách khắc phục.
:
Những lỗi thường gặp trong quá trình sử dụng Kubernetes:
1. Lỗi cấu hình:
* Máy chủ Kubernetes không được cấu hình đúng: Kiểm tra cấu hình máy chủ Kubernetes (master node) để đảm bảo rằng các thông số cần thiết như IP, cổng, tên miền, tên miền SSL, v.v. được đặt đúng.
* Các pod không được cấu hình đúng: Kiểm tra cấu hình pod để đảm bảo rằng các yếu tố như image, lưu trữ, mạng, v.v. được đặt đúng.
* Các tài nguyên (resource) không được cấp phát đúng: Kiểm tra xem các tài nguyên như CPU, bộ nhớ, lưu trữ, v.v. được cấp phát cho các pod có đủ hay không.
2. Lỗi mạng:
* Mạng không được cấu hình đúng: Kiểm tra cấu hình mạng để đảm bảo rằng các địa chỉ IP, cổng, tên miền, tên miền SSL, v.v. được cấu hình đúng.
* Lỗi mạng giữa các node: Kiểm tra xem có lỗi mạng giữa các node trong cụm Kubernetes hay không. Có thể sử dụng các công cụ như 'netstat', 'ifconfig', 'tcpdump' để kiểm tra.
* Lỗi mạng giữa các pod: Kiểm tra xem có lỗi mạng giữa các pod trong một node hay không. Có thể sử dụng các công cụ như 'netstat', 'ifconfig', 'tcpdump' để kiểm tra.
3. Lỗi phần cứng:
* Phần cứng không đủ mạnh: Kiểm tra xem phần cứng máy chủ Kubernetes có đủ mạnh để chạy cụm Kubernetes hay không. Cụ thể, kiểm tra xem CPU, bộ nhớ, lưu trữ, v.v. có đủ hay không.
* Phần cứng bị lỗi: Kiểm tra xem phần cứng máy chủ Kubernetes có bị lỗi hay không. Có thể sử dụng các công cụ như 'lshw', 'dmesg' để kiểm tra.
4. Lỗi phần mềm:
* Phần mềm Kubernetes không được cập nhật: Kiểm tra xem phần mềm Kubernetes đang sử dụng có được cập nhật đến phiên bản mới nhất hay không. Cập nhật phần mềm Kubernetes thường giúp khắc phục các lỗi bảo mật và cải thiện hiệu suất.
* Phần mềm Kubernetes có lỗi: Kiểm tra xem phần mềm Kubernetes có lỗi hay không. Có thể sử dụng các công cụ như 'journal', 'dmesg' để kiểm tra.
* Các ứng dụng khách không được cập nhật: Kiểm tra xem các ứng dụng khách đang sử dụng trên cụm Kubernetes có được cập nhật đến phiên bản mới nhất hay không. Cập nhật các ứng dụng khách thường giúp khắc phục các lỗi bảo mật và cải thiện hiệu suất.
Cách khắc phục:
* Lỗi cấu hình: Sửa các thông số cấu hình sai trong máy chủ Kubernetes và các pod. Ví dụ: đảm bảo rằng máy chủ Kubernetes có địa chỉ IP và cổng phù hợp, các pod có image và lưu trữ phù hợp, v.v.
* Lỗi mạng: Sửa các vấn đề về mạng giữa các node và các pod. Ví dụ: đảm bảo rằng mạng được cấu hình đúng, kiểm tra xem có lỗi mạng giữa các node hay không, kiểm tra xem có lỗi mạng giữa các pod hay không, v.v.
* Lỗi phần cứng: Nâng cấp phần cứng máy chủ Kubernetes nếu cần thiết. Ví dụ: tăng CPU, bộ nhớ, lưu trữ, v.v.
* Lỗi phần mềm: Cập nhật phần mềm Kubernetes và các ứng dụng khách. Ví dụ: cập nhật phiên bản mới nhất của Kubernetes, cập nhật các bản vá lỗi bảo mật cho các ứng dụng khách, v.v.
Lưu ý: Khi gặp phải lỗi trong quá trình sử dụng Kubernetes, cần phải xác định rõ nguyên nhân gốc rễ của lỗi trước khi có thể khắc phục. Việc xác định nguyên nhân gốc rễ sẽ giúp bạn tiết kiệm thời gian và công sức khi khắc phục lỗi.
Tools / Function calls
----------------------
The model is trained to choose the right tools and to provide useful, accurate answers to the questions it receives. In this example, the model is asked for some information about Apple stock.
### Explain
This is the standard for declaring model tools, in 'json' format. See also the model's prompt / chat template.
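As a rough illustration only (the field names here are assumptions; the authoritative schema is defined by the chat template), a declaration for a hypothetical 'get_stock_price' tool might look like this, expressed as a Python dict:

```python
# Hypothetical tool declaration in the common JSON-schema style.
tools = [
    {
        "name": "get_stock_price",  # hypothetical tool name
        "description": "Get the latest price for a stock symbol.",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {"type": "string", "description": "Ticker symbol, e.g. AAPL"},
            },
            "required": ["symbol"],
        },
    }
]
```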
### Workflow
#### Turn 1
To use tools, the model's supporting role is 'plugins', which carries the declared tool schema. Below is an example of the 'messages' for turn 1.
At this point, the model will return 'output' as a primitive value, as follows:
Use the 'get_tools' function to parse it into readable, usable results.
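A sketch of what the turn-1 'messages' could look like, using the roles described in this document (the exact structure is an assumption; see the chat template for the authoritative format):

```python
import json

# Hypothetical tool schema, as in the previous sketch.
tools = [{"name": "get_stock_price", "description": "Get the latest price for a stock symbol.",
          "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}}}]

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "plugins", "content": json.dumps(tools)},  # declared tool schema
    {"role": "user", "content": "What is Apple's stock price right now?"},
]
# The model's raw 'output' would then be parsed with the project's get_tools helper
# (exact signature assumed) to obtain the tool name and arguments.
```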
#### Turn 2
Take the tool name and arguments from the model's output, execute the tool, and return the data. Here we will assume the returned data is as follows, sampled from URL (cleaned up):
Continue using the model to answer user questions based on the data returned from the tool.
The value of 'output' returned by the model will now look similar to the following.
First, when the question is in English.
And then, when the question is in Vietnamese.
Usage
=====
The model is trained to serve general tasks well and in a balanced way, so there are many possible approaches when using it in production.
Things to know
--------------
### Recommendations
#### General tasks
Experiment to find the right fit for your own needs. Below are some suggestions that you might consider.
##### Answer the questions
In this task, the model can be integrated to develop RAG products to help answer questions for customer care departments, questions about internal documents, policies, product information, order details, etc.
##### Coding support
In this task, the model can help find errors or write algorithms and functions, helping to speed up development and improve product quality.
##### Content creation
In this task, the model can give suggestions for building layouts and content samples that need high creativity and personality in the content.
##### Other
Don't limit your creativity. Feel free to experiment with the model and use it to build good things.
#### Virtual assistant
The model's language understanding, combined with its overall capabilities, makes it suitable for handling one or more tasks in the development of an intelligent virtual assistant.
The model is trained to understand and empathize emotionally, which makes its responses more natural and understanding. This also helps a lot with this use case.
#### As a pre-trained base
If the model cannot satisfy the given requirements out of the box, it can be further trained to serve the desired tasks better and more effectively.
By taking advantage of the model's current strong inference and multi-tasking capabilities, training it for a few separate tasks will save a lot of effort, time and cost for the project.
Notably, the model has already been integrated to replace some tasks previously handled by OpenAI GPT-4 in the business/product the founder has been working on. For those tasks, the model is considered cost-effective and well worth it.
### Bias and limitations
In addition, there are some things you need to know before using as follows:
#### Generation configuration
The temperature affects the factuality of the answer. Setting a temperature above roughly 0.2 to 0.4 will produce more creative answers but may reduce their accuracy; weigh this trade-off based on your task.
Hint: you can write a prompt that takes the user input and asks the model to choose an appropriate temperature based on the question; this is useful when developing a virtual assistant.
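For example, with 'transformers' the temperature is set at generation time; a minimal sketch (the value is only a starting point):

```python
from transformers import GenerationConfig

# Roughly: temperature around 0.2-0.4 favors factual answers; higher values favor creativity.
gen_config = GenerationConfig(do_sample=True, temperature=0.3, max_new_tokens=512)
# outputs = model.generate(**inputs, generation_config=gen_config)
```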
#### Emotion bias and empathy
The model is built with an emotional bias and takes positions when faced with problems or reasoning questions. This can be good or bad depending on the context. If you don't want the model to take a fixed stance on certain questions or problems, set rules for it in the system prompt.
This bias often appears when the model's system prompt sets it up as an assistant and mentions that it needs to have empathy for a certain user or experience. This is intentional in the training process; it does not come from something unforeseen.
Directly
--------
To use the model directly, there are many ways to get started, choose one of the following ways to experience it.
### Transformers
For direct use with 'transformers', you can easily get started with the following steps.
* First, install 'transformers' with 'pip' (see the sketch below).
* Then you can start using the model directly.
* Additionally, you can load the model with 4-bit quantization to minimize the required resources; the sketch below shows this as an option.
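A minimal sketch of these steps is below. The repository id 'ghost-x/ghost-7b-alpha' is an assumption here; check the model card header for the exact id.

```python
# pip install -U transformers accelerate  (add bitsandbytes for 4-bit loading)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ghost-x/ghost-7b-alpha"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    # Optional 4-bit quantization to minimize required resources:
    # quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)

messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.3)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```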
### Unsloth
For direct use with 'unsloth', you can easily get started with the following steps.
* First, install 'unsloth' with 'pip' (see the sketch below).
* Initialize and optimize the model before use.
* Then you can start using the model directly.
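A minimal sketch, again assuming the 'ghost-x/ghost-7b-alpha' repository id:

```python
# pip install -U unsloth
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ghost-x/ghost-7b-alpha",  # assumed repository id
    max_seq_length=2048,
    load_in_4bit=True,  # optional 4-bit loading to reduce memory
)
FastLanguageModel.for_inference(model)  # optimize the model for inference before use

inputs = tokenizer(["Hello, who are you?"], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```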
Online
------
The content of this document will be updated soon. The documentation will guide usage through providers such as Huggingface Inference API, Together AI, or Replicate.
Playground
----------
To make it easy for everyone to quickly try the model on platforms like Google Colab and Kaggle, we've made notebooks available so you can get started right away. See the available notebooks with details here.
We offer notebooks across a selection of tools and platforms so you can get started easily. Pick one of the options below to begin.
There are currently two language versions, English and Vietnamese, and they are almost identical. Each covers installation instructions and helper functions for quickly experiencing the model's text generation, plus examples with system prompts for each field so you can understand them better.
#### Playground using Transformers
* Kaggle @ Ghost 7B Alpha - Playground using Transformers, Vietnamese.
* Colab @ Ghost 7B Alpha - Playground using Transformers, Vietnamese.
* Kaggle @ Ghost 7B Alpha - Playground using Transformers, English.
* Colab @ Ghost 7B Alpha - Playground using Transformers, English.
#### Playground using Unsloth
* Kaggle @ Ghost 7B Alpha - Playground using Unsloth, Vietnamese.
* Colab @ Ghost 7B Alpha - Playground using Unsloth, Vietnamese.
* Kaggle @ Ghost 7B Alpha - Playground using Unsloth, English.
* Colab @ Ghost 7B Alpha - Playground using Unsloth, English.
In our experience, Kaggle gives much better results than Colab when working through these notebooks. We recommend using Kaggle.
Deployments
-----------
The models developed by Ghost X share the goal of being easy to integrate and use in practice, saving costs and facilitating development for the community and startups.
For production deployment on small to large infrastructure, please see the more detailed instructions in this article. The article covers the most common and effective deployment solutions using leading, trusted libraries such as vLLM, and also discusses more optimal solutions and methods for different needs so you can choose the appropriate one.
Evaluation
==========
To ensure effective model development, we ran popular evaluations to give everyone a broader view of the model's capabilities. We had to work to balance the quality between the English and Vietnamese languages to ensure the model still performed well in both of these primary languages.
In addition, if your goals are oriented towards a certain main language, you can refer to the advanced user guide to improve model quality for the tasks and languages you desire.
Open LLM Leaderboard
--------------------
The results of this evaluation will be updated soon.
MT-Bench
--------
MT-bench is a challenging multi-turn question set designed to evaluate the conversational and instruction-following ability of models. [[source]](URL)
Ghost 7B Alpha achieved a decent score in the MT-Bench evaluation; we worked hard to balance the reasoning ability and linguistic insight of both primary languages, English and Vietnamese. Overall, it was able to outperform some large language models such as tulu-30b, guanaco-65b, and mpt-30b-chat, which are many times larger.
AlpacaEval
----------
AlpacaEval is an automated tool for evaluating instruction-following language models against the AlpacaFarm dataset. It stands out for its human-validated, high-quality assessments that are both cost-effective and rapid. [[source]](URL)
Similarly, the model performed well in this evaluation, outperforming larger models such as vicuna-13b, guanaco-33b, and nous-hermes-13b.
Notes
=====
Thanks
------
Finally, the project sends sincere thanks to the products that have supported its development.
#### Unsloth

Unsloth, a great tool that helps us develop products easily and at a lower cost than expected.
#### Kaggle
<img src="URL" width="200px" align="center" />
Kaggle, for its generous hours of free notebooks for research projects.
#### And...
Good friends who have accompanied the project: Luan Nguyen and Phu Tran.
Confidence
----------
Beyond the Ghost 7B Alpha project, Ghost X aims to keep developing and improving better models, supporting the community and businesses with as much openness as possible.
A preview of the Ghost 7B Beta project plan: this model is expected to perform even better, with a deeper focus on multi-tasking, math, and reasoning, along with an expanded context length and support for additional (highly requested) languages.
The organization is operated and developed with Hieu Lam's personal resources. For any development support or consulting request, please feel free to contact the organization; we would be very happy to hear from you. Email: URL@URL.
Ghost X is happy to provide models to server providers, aiming to help startups develop better.
Contact
-------
Follow Ghost X to stay updated with the latest information.
* Twitter/X via @ghostx_ai.
* HuggingFace via @ghost-x.
* Official website URL.
Điều này có thể dẫn đến nhiều vấn đề, chẳng hạn như sự thất bại trong công việc hoặc các mối quan hệ.\n\n\nDo đó, tôi cho rằng câu nói \"Điều duy nhất nguy hiểm hơn sự ngu dốt là sự kiêu ngạo\" là đúng.\n\n\n\nExplore the model's literary understanding by providing meaningful story prompts, asking to identify what the story is.\n\n\n\n : Câu chuyện có nội dung và ý nghĩa chứa đựng một thông điệp ý nghĩa lớn đến tất cả mọi người, đó là tình đoàn kết, khao khát cho sự hòa bình và nghị lực phi thường trước mọi khó khăn. Có nhân vật chính là loài dế. Câu hỏi: câu chuyện đó của tác giả nào ?\n :\nCâu chuyện \"Dế Mèn phiêu lưu ký\" của Tô Hoài.\n\n\nGiải thích:\n\n\n* \"Dế Mèn phiêu lưu ký\" là một tác phẩm văn học nổi tiếng của nhà văn Tô Hoài, được sáng tác năm 1941.\n* Tác phẩm kể về cuộc phiêu lưu của Dế Mèn, một chú dế nhỏ sống trong một ngôi làng nhỏ. Trong quá trình phiêu lưu, Dế Mèn đã gặp gỡ và học hỏi nhiều điều về cuộc sống, về tình yêu, về sự hy sinh và về sức mạnh của tình đoàn kết.\n* Tác phẩm thể hiện rõ tinh thần đoàn kết, khao khát cho sự hòa bình và nghị lực phi thường của con người trước mọi khó khăn, thử thách. Đây chính là thông điệp ý nghĩa lớn của câu chuyện \"Dế Mèn phiêu lưu ký\".\n\n\n\nThe ability to reason to the answer based on the characteristics of is provided.\n\n\n\n : Con vật nào lớn hơn con chuột, nhỏ hơn con Voi, không quá lớn, không phải con chó và thường được nuôi ở nhà ?\n : \nCon mèo là loài động vật lớn hơn con chuột nhưng nhỏ hơn con Voi, không quá lớn, không phải con chó và thường được nuôi ở nhà.",
"### Try coding skills\n\n\nTry asking the Typescript coding model to perform finding the second largest number in the array to see if it works?\n\n\n\n : Viết mã TypeScript để tạo một hàm thực hiện chức năng: cho mảng toàn số nguyên, tìm số lớn thứ hai trong mảng.\n\n : \nGiải thích:\n\n\n1. Kiểm tra xem mảng có ít nhất 2 phần tử không. Nếu mảng chỉ có 1 phần tử hoặc ít hơn, thì không có số lớn thứ hai trong mảng. Trả về lỗi nếu điều kiện này đúng.\n2. Khởi tạo hai biến 'largest' và 'secondLargest' với giá trị ban đầu là phần tử đầu tiên của mảng và số nguyên nhỏ nhất có thể biểu diễn được trong JavaScript ('Number.MIN\\_SAFE\\_INTEGER'), tương đương với 'Integer.MIN\\_VALUE' trong Java.\n3. Duyệt qua từng phần tử trong mảng, bắt đầu từ phần tử thứ hai. Nếu phần tử hiện tại lớn hơn 'largest', thì cập nhật 'secondLargest' bằng 'largest' và 'largest' bằng phần tử hiện tại. Nếu phần tử hiện tại lớn hơn 'secondLargest' nhưng nhỏ hơn 'largest', thì cập nhật 'secondLargest' bằng phần tử hiện tại.\n4. Sau khi duyệt qua tất cả các phần tử trong mảng, trả về 'secondLargest'. Đây chính là số lớn thứ hai trong mảng.\n\n\n\nNow let's try asking for a coding model to build a smooth neural network with python.\n\n\n\n : Trình bày cách sử dụng thư viện Keras hoặc TensorFlow trong Python để xây dựng và huấn luyện mạng nơ-ron đa lớp cho bài toán phân loại ảnh.\n : \nSử dụng thư viện Keras:\n\n\n1. Nhập thư viện Keras:\n2. Tạo một mô hình mạng nơ-ron đa lớp:\n3. Biên dịch và huấn luyện mô hình:\n\n\nSử dụng thư viện TensorFlow:\n\n\n1. Nhập thư viện TensorFlow:\n2. Tạo một mô hình mạng nơ-ron đa lớp:\n3. Biên dịch và huấn luyện mô hình:\n\n\nLưu ý:\n\n\n* Bạn có thể sử dụng các lớp mạng nơ-ron khác nhau trong mô hình, chẳng hạn như lớp mạng nơ-ron tích chập (Conv2D), lớp mạng nơ-ron lấy mẫu (MaxPooling2D), lớp mạng nơ-ron đầy đủ kết nối (Dense), lớp mạng nơ-ron softmax, v.v.\n* Bạn có thể điều chỉnh số lớp, số nơ-ron trong mỗi lớp và số lần lặp (epochs) để tối ưu hóa mô hình cho bài toán cụ thể của bạn.\n* Bạn có thể sử dụng các kỹ thuật huấn luyện bổ sung như dropout, regularization và early stopping để cải thiện hiệu suất của mô hình.\n\n\n\nTry asking the model about common errors when deploying and using K8S to see what results the model gives.\n\n\n\n : Những lỗi thường gặp trong quá trình sử dụng K8S là gì ? Hãy nêu cách khắc phục.\n : \nNhững lỗi thường gặp trong quá trình sử dụng Kubernetes:\n\n\n1. Lỗi cấu hình:\n\n\n* Máy chủ Kubernetes không được cấu hình đúng: Kiểm tra cấu hình máy chủ Kubernetes (master node) để đảm bảo rằng các thông số cần thiết như IP, cổng, tên miền, tên miền SSL, v.v. được đặt đúng.\n* Các pod không được cấu hình đúng: Kiểm tra cấu hình pod để đảm bảo rằng các yếu tố như image, lưu trữ, mạng, v.v. được đặt đúng.\n* Các tài nguyên (resource) không được cấp phát đúng: Kiểm tra xem các tài nguyên như CPU, bộ nhớ, lưu trữ, v.v. được cấp phát cho các pod có đủ hay không.\n\n\n2. Lỗi mạng:\n\n\n* Mạng không được cấu hình đúng: Kiểm tra cấu hình mạng để đảm bảo rằng các địa chỉ IP, cổng, tên miền, tên miền SSL, v.v. được cấu hình đúng.\n* Lỗi mạng giữa các node: Kiểm tra xem có lỗi mạng giữa các node trong cụm Kubernetes hay không. Có thể sử dụng các công cụ như 'netstat', 'ifconfig', 'tcpdump' để kiểm tra.\n* Lỗi mạng giữa các pod: Kiểm tra xem có lỗi mạng giữa các pod trong một node hay không. Có thể sử dụng các công cụ như 'netstat', 'ifconfig', 'tcpdump' để kiểm tra.\n\n\n3. 
Lỗi phần cứng:\n\n\n* Phần cứng không đủ mạnh: Kiểm tra xem phần cứng máy chủ Kubernetes có đủ mạnh để chạy cụm Kubernetes hay không. Cụ thể, kiểm tra xem CPU, bộ nhớ, lưu trữ, v.v. có đủ hay không.\n* Phần cứng bị lỗi: Kiểm tra xem phần cứng máy chủ Kubernetes có bị lỗi hay không. Có thể sử dụng các công cụ như 'lshw', 'dmesg' để kiểm tra.\n\n\n4. Lỗi phần mềm:\n\n\n* Phần mềm Kubernetes không được cập nhật: Kiểm tra xem phần mềm Kubernetes đang sử dụng có được cập nhật đến phiên bản mới nhất hay không. Cập nhật phần mềm Kubernetes thường giúp khắc phục các lỗi bảo mật và cải thiện hiệu suất.\n* Phần mềm Kubernetes có lỗi: Kiểm tra xem phần mềm Kubernetes có lỗi hay không. Có thể sử dụng các công cụ như 'journal', 'dmesg' để kiểm tra.\n* Các ứng dụng khách không được cập nhật: Kiểm tra xem các ứng dụng khách đang sử dụng trên cụm Kubernetes có được cập nhật đến phiên bản mới nhất hay không. Cập nhật các ứng dụng khách thường giúp khắc phục các lỗi bảo mật và cải thiện hiệu suất.\n\n\nCách khắc phục:\n\n\n* Lỗi cấu hình: Sửa các thông số cấu hình sai trong máy chủ Kubernetes và các pod. Ví dụ: đảm bảo rằng máy chủ Kubernetes có địa chỉ IP và cổng phù hợp, các pod có image và lưu trữ phù hợp, v.v.\n* Lỗi mạng: Sửa các vấn đề về mạng giữa các node và các pod. Ví dụ: đảm bảo rằng mạng được cấu hình đúng, kiểm tra xem có lỗi mạng giữa các node hay không, kiểm tra xem có lỗi mạng giữa các pod hay không, v.v.\n* Lỗi phần cứng: Nâng cấp phần cứng máy chủ Kubernetes nếu cần thiết. Ví dụ: tăng CPU, bộ nhớ, lưu trữ, v.v.\n* Lỗi phần mềm: Cập nhật phần mềm Kubernetes và các ứng dụng khách. Ví dụ: cập nhật phiên bản mới nhất của Kubernetes, cập nhật các bản vá lỗi bảo mật cho các ứng dụng khách, v.v.\n\n\nLưu ý: Khi gặp phải lỗi trong quá trình sử dụng Kubernetes, cần phải xác định rõ nguyên nhân gốc rễ của lỗi trước khi có thể khắc phục. Việc xác định nguyên nhân gốc rễ sẽ giúp bạn tiết kiệm thời gian và công sức khi khắc phục lỗi.\n\n\n\nTools / Function calls\n----------------------\n\n\nThe model is trained to be able to choose the right tools to use and to provide useful and accurate answers to the questions it receives. In this example, the model is asked for some information about Apple stock.",
"### Explain\n\n\nThis is the standard for declaring model tools, in 'json' format. Also, See more about the model's prompt / chat template.",
"### Workflow",
"#### Turn 1\n\n\nTo use the tools, the model's supporting role is 'plugin' which includes the declared tooling schema. Below is an example of 'messages' with turn 1 to use.\n\n\nAt this point, the model will return 'output' as the primitive value as follows:\n\n\nUse the 'get\\_tools' function to parse the readable and usable results.",
"#### Turn 2\n\n\nGet information about tools to use with arguments. Use it to execute and return data, here we will assume the return data is as follows, sampled from URL (cleaned up):\n\n\nContinue using the model to answer user questions based on the data returned from the tool.\n\n\nThe value of 'output' returned by the model will now be similar to the following.\n\n\nFor the question is English.\n\n\nAnd the question is Vietnamese.\n\n\nUsage\n=====\n\n\nThe model is trained to serve well and balance general tasks, so there will be many approaches when used for production.\n\n\nThings to know\n--------------",
"### Recommendations",
"#### General tasks\n\n\nExperience it to find the right fit for your own desires. Below are some suggestions that you might consider.",
"##### Answer the questions\n\n\nIn this task, the model can be integrated to develop RAG products to help answer questions for customer care departments, questions about internal documents, policies, product information, order details, etc.",
"##### Coding support\n\n\nIn this task, the model can help find errors or write algorithms and functions that help speed up the domain development process and improve product quality.",
"##### Content creation\n\n\nIn this task, the model can give suggestions for building layouts and content samples that need high creativity and personality in the content.",
"##### Other\n\n\nDon't limit your creativity to the model. Feel free to experience it and use it to build good things.",
"#### Virtual assistant\n\n\nAn understanding of the language along with the overall capabilities of the model, may be suitable for integrating its development as a set of one or more tasks in the development of an intelligent virtual assistant.\n\n\nThe model is trained to understand and empathize emotionally, making the response more natural and understanding. This also helps a lot with this request.",
"#### As a pre-training\n\n\nThe model can be further trained to serve the desired tasks better and more effectively if it cannot satisfy the given requirements.\nTaking advantage of the model's current superior inference and multi-tasking capabilities, training it for a few separate tasks will help save a lot of effort, time and cost for the project.\n\n\nRevealed, the model has been integrated to replace some tasks that were previously used by OpenAI GPT4 in the business/product that the founder has been working on. For those tasks, the model is considered effective compared to the cost and is very worthy.",
"### Bias and limitations\n\n\nIn addition, there are some things you need to know before using as follows:",
"#### Generation configuration\n\n\nThe temperature affects the truth of the answer. Setting a temperature value greater than 0.2 - 0.4 will result in a more creative answer but may affect the accuracy of the answer, please consider this based on your task.\n\n\nHint: you can write a prompt to receive input and ask the model to choose the appropriate temperature based on the question, useful in the case of virtual assistant development.",
"#### Emotion bias and empathy\n\n\nThe model is built with an emotional bias and takes positions when faced with problems or reasoning questions. This can be good or bad depending on the context. If you don't want the model to be rigid in asking certain questions or problems, set up rules in the system prompt for it.\n\n\nThis bias often appears when the model's system prompt sets it up as an assistant and mentions that it needs to have empathy for/with a certain user/experience. This is intentional during the model training process, it does not come from something unforeseen.\n\n\nDirectly\n--------\n\n\nTo use the model directly, there are many ways to get started, choose one of the following ways to experience it.",
"### Transformers\n\n\nFor direct use with 'transformers', you can easily get started with the following steps.\n\n\n* Firstly, you need to install transformers via the command below with 'pip'.\n* Right now, you can start using the model directly.\n* Additionally, you can also use a model with 4bit quantization to reduce the required resources at least. You can start with the code below.",
"### Unsloth\n\n\nFor direct use with 'unsloth', you can easily get started with the following steps.\n\n\n* Firstly, you need to install unsloth via the command below with 'pip'.\n* Initialize and optimize the model before use.\n* Right now, you can start using the model directly.\n\n\nOnline\n------\n\n\nThe content of this document will be updated soon. The documentation will guide usage through providers such as Huggingface Inference API, Together AI, or Replicate.\n\n\nPlayground\n----------\n\n\nTo make it easy for everyone to quickly experience the model through platforms like Google Colab and Kaggle. We’ve made these notebooks available so you can get started right away. See available notebooks with details here.\n\n\nWe offer notebooks with a wide selection of tools and platforms so you can easily get started. Here are some available options for you to try right now. Let’s pick one to get started.\n\n\nThere are currently two language versions: English and Vietnamese. In each version, it will be almost the same. The content will include installation instructions and functions to help get started quickly with experiencing the model's text generation capabilities. In addition, it also provides examples with system prompts for each field so you can better understand them.",
"#### Playground using Transformers\n\n\n* Kaggle @ Ghost 7B Alpha - Playground using Transformers, Vietnamese.\n* Colab @ Ghost 7B Alpha - Playground using Transformers, Vietnamese.\n* Kaggle @ Ghost 7B Alpha - Playground using Transformers, English.\n* Colab @ Ghost 7B Alpha - Playground using Transformers, English.",
"#### Playground using Unsloth\n\n\n* Kaggle @ Ghost 7B Alpha - Playground using Unsloth, Vietnamese.\n* Colab @ Ghost 7B Alpha - Playground using Unsloth, Vietnamese.\n* Kaggle @ Ghost 7B Alpha - Playground using Unsloth, English.\n* Colab @ Ghost 7B Alpha - Playground using Unsloth, English.\n\n\nRevealing, experience with Kaggle will give you much better results in creating documents than Colab. We recommend using Kaggle.\n\n\nDeployments\n-----------\n\n\nThe models developed by Ghost X have the same goal of being easy to integrate and use in practice to save costs and facilitate development for the community and startups.\n\n\nFor production deployment with small to large infrastructure, please see more detailed instructions in this article. The article will provide the most common and effective deployment solutions with leading, trusted libraries such as vLLM and more. In addition, it also has information about more optimal solutions and methods depending on each need to be able to choose the appropriate solution.\n\n\nEvaluation\n==========\n\n\nTo ensure effective model development, we ran popular evaluations to give everyone a broader view of the model's capabilities. We had to work to balance the quality between the English and Vietnamese languages to ensure the model still performed well in both of these primary languages.\n\n\nIn addition, if your goals are oriented towards a certain main language, you can refer to the advanced user guide to improve model quality for the tasks and languages you desire.\n\n\nOpen LLM Leaderboard\n--------------------\n\n\nThe results of this evaluation will be updated soon.\n\n\nMT-Bench\n--------\n\n\nMT-bench is a challenging multi-turn question set designed to evaluate the conversational and instruction-following ability of models. [[source from URL]](URL\n\n\nGhost 7B Alpha achieved a decent score for the MT-Bench review, we worked hard to balance the reasoning ability and linguistic insight of both primary languages, English and Vietnamese. Overall, it was able to outperform some large language models such as tulu-30b, guanaco-65b, and mpt-30b-chat which are many times larger.\n\n\n\nAlpacaEval\n----------\n\n\nAlpacaEval is an automated tool for evaluating instruction-following language models against the AlpacaFarm dataset. It stands out for its human-validated, high-quality assessments that are both cost-effective and rapid. [[source from URL]](URL\n\n\nSimilarly, in this evaluation the model achieved good performance as it outperformed larger sized models such as vicuna-13b, guanaco-33b, and nous-hermes-13b.\n\n\n\nNotes\n=====\n\n\nThanks\n------\n\n\nAfter all, the project also wants to send sincere thanks to the products that have supported the project's development.",
"#### Unsloth\n\n\n\nUnsloth, a great tool that helps us easily develop products, at a lower cost than expected.",
"#### Kaggle\n\n\n<img src=\"URL width=\"200px\" align=\"center\" />\n\n\nKaggle, generous hours of free notebooks for research projects.",
"#### And...\n\n\nGood friends, who have accompanied the project, Luan Nguyen and Phu Tran.\n\n\nConfidence\n----------\n\n\nIn addition to the Ghost 7B Alpha project, Ghost X always wants to develop and improve many better models in the future, better supporting the community and businesses with the most openness possible.\n\n\nRevealing the Ghost 7B Beta project plan. This model is expected to outperform with a deeper focus on multi-tasking, math, and reasoning. Along with that is the ability to expand context length and support other languages (highly requested languages).\n\n\nThe organization is being operated and developed by Hieu Lam's personal resources, if there is any development support or consulting request. Please feel free to contact the organization, we are very happy about this. Directly via email: URL@URL.\n\n\nGhost X is happy to support providing models for server providers, aiming to help startups develop better.\n\n\nContact\n-------\n\n\nFollow Ghost X to stay updated with the latest information.\n\n\n* Twitter/X via @ghostx\\_ai.\n* HuggingFace via @ghost-x.\n* Official website URL."
] |
image-segmentation
| null |
## This repo holds the official model weights of "[<ins>Bilateral Reference for High-Resolution Dichotomous Image Segmentation</ins>](https://arxiv.org/pdf/2401.03407.pdf)" (_arXiv 2024_).
This repo contains the weights of BiRefNet proposed in our paper, which has achieved the SOTA performance on three tasks (DIS, HRSOD, and COD).
Go to my GitHub page for BiRefNet codes and the latest updates: https://github.com/ZhengPeng7/BiRefNet :)
#### Try our online demos for inference:
+ **Inference and evaluation** of your given weights: [](https://colab.research.google.com/drive/1MaEiBfJ4xIaZZn0DqKrhydHB8X97hNXl#scrollTo=DJ4meUYjia6S)
+ **Online Inference with GUI on Hugging Face** with adjustable resolutions: [](https://huggingface.co/spaces/ZhengPeng7/BiRefNet_demo)
<img src="https://drive.google.com/thumbnail?id=12XmDhKtO1o2fEvBu4OE4ULVB2BK0ecWi&sz=w1080" />
## Citation
```
@article{zheng2024birefnet,
title={Bilateral Reference for High-Resolution Dichotomous Image Segmentation},
author={Zheng, Peng and Gao, Dehong and Fan, Deng-Ping and Liu, Li and Laaksonen, Jorma and Ouyang, Wanli and Sebe, Nicu},
journal={arXiv},
year={2024}
}
```
|
{"language": ["en"], "license": "mit", "tags": ["dichotomous-image-segmentation", "salient-object-detection", "camouflaged-object-detection", "image-matting"], "pipeline_tag": "image-segmentation"}
|
ZhengPeng7/BiRefNet
| null |
[
"dichotomous-image-segmentation",
"salient-object-detection",
"camouflaged-object-detection",
"image-matting",
"image-segmentation",
"en",
"arxiv:2401.03407",
"license:mit",
"region:us"
] | null |
2024-04-13T10:40:59+00:00
|
[
"2401.03407"
] |
[
"en"
] |
TAGS
#dichotomous-image-segmentation #salient-object-detection #camouflaged-object-detection #image-matting #image-segmentation #en #arxiv-2401.03407 #license-mit #region-us
|
## This repo holds the official model weights of "<ins>Bilateral Reference for High-Resolution Dichotomous Image Segmentation</ins>" (_arXiv 2024_).
This repo contains the weights of BiRefNet proposed in our paper, which has achieved the SOTA performance on three tasks (DIS, HRSOD, and COD).
Go to my GitHub page for BiRefNet codes and the latest updates: URL :)
#### Try our online demos for inference:
+ Inference and evaluation of your given weights
+ Online Inference with GUI on Hugging Face with adjustable resolutions
|
[
    "## This repo holds the official model weights of \"<ins>Bilateral Reference for High-Resolution Dichotomous Image Segmentation</ins>\" (_arXiv 2024_).\n\nThis repo contains the weights of BiRefNet proposed in our paper, which has achieved the SOTA performance on three tasks (DIS, HRSOD, and COD).\n\nGo to my GitHub page for BiRefNet codes and the latest updates: URL :)",
    "#### Try our online demos for inference:\n\n+ Inference and evaluation of your given weights\n+ Online Inference with GUI on Hugging Face with adjustable resolutions"
] |
[
    "TAGS\n#dichotomous-image-segmentation #salient-object-detection #camouflaged-object-detection #image-matting #image-segmentation #en #arxiv-2401.03407 #license-mit #region-us \n",
    "## This repo holds the official model weights of \"<ins>Bilateral Reference for High-Resolution Dichotomous Image Segmentation</ins>\" (_arXiv 2024_).\n\nThis repo contains the weights of BiRefNet proposed in our paper, which has achieved the SOTA performance on three tasks (DIS, HRSOD, and COD).\n\nGo to my GitHub page for BiRefNet codes and the latest updates: URL :)",
    "#### Try our online demos for inference:\n\n+ Inference and evaluation of your given weights\n+ Online Inference with GUI on Hugging Face with adjustable resolutions"
] |
text-classification
|
transformers
|
# Model Card for Model ID

Fine-tuned BERT trained to identify political party from speech in the UK Parliament. Trained with Labour/Conservative speeches using the hansard dataset 2000-2020

## Model Details

- License: MIT
- Finetuned from model : BERT

### Model Sources

- Repository: Available on Github
## Uses
Model can be used to predict whether an item of speech from the UK Parliament was said by a member of the Labour or Conservative Party.
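A minimal sketch of how this might look with the `transformers` pipeline API. The example speech is invented, and the label names returned (generic `LABEL_0`/`LABEL_1` vs. actual party names) depend on how the model config was exported, so treat the mapping as an assumption to verify.

```python
# Hedged sketch: classify a speech excerpt with the fine-tuned model.
from transformers import pipeline

classifier = pipeline("text-classification", model="sisyphus199/ukparliamentBERT")

speech = "We must invest in our National Health Service and protect workers' rights."
print(classifier(speech))
# e.g. [{'label': 'LABEL_0', 'score': 0.93}] -- map labels to Labour/Conservative
# according to the training setup (the exact mapping here is an assumption).
```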
### Out-of-Scope Use
Model has only been exposed to UK data and speeches from Labour or Conservative. May not work as intended for other parties or geographies.
### Training Data
[biglam/hansard_speech](https://huggingface.co/datasets/biglam/hansard_speech)
#### Preprocessing
[bert-cased]
## Evaluation
Evaluated on held-out test data. Achieved test accuracy >80% and test loss <0.4 (binary cross-entropy).
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@misc{baly2020detect,
title={We Can Detect Your Bias: Predicting the Political Ideology of News Articles},
author={Ramy Baly and Giovanni Da San Martino and James Glass and Preslav Nakov},
year={2020},
eprint={2010.05338},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
|
{"library_name": "transformers", "tags": ["politics", "parliamnet", "sentiment-analysis"]}
|
sisyphus199/ukparliamentBERT
| null |
[
"transformers",
"safetensors",
"bert",
"text-classification",
"politics",
"parliamnet",
"sentiment-analysis",
"arxiv:2010.05338",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T10:46:37+00:00
|
[
"2010.05338"
] |
[] |
TAGS
#transformers #safetensors #bert #text-classification #politics #parliamnet #sentiment-analysis #arxiv-2010.05338 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
Fine-tuned BERT trained to identify political party from speech in the UK Parliament. Trained with Labour/Conservative speeches using the hansard dataset 2000-2020
## Model Details
- License: MIT
- Finetuned from model : BERT
### Model Sources
- Repository: Available on Github
## Uses
Model can be used to predict whether an item of speech from the UK Parliament was said by a member of the Labour or Conservative Party.
### Out-of-Scope Use
Model has only been exposed to UK data and speeches from Labour or Conservative. May not work as intended for other parties or geographies.
### Training Data
biglam/hansard_speech
#### Preprocessing
[bert-cased]
## Evaluation
Evaluated on held-out test data. Achieved test accuracy >80% and test loss <0.4 (binary cross-entropy).
BibTeX:
@misc{baly2020detect,
title={We Can Detect Your Bias: Predicting the Political Ideology of News Articles},
author={Ramy Baly and Giovanni Da San Martino and James Glass and Preslav Nakov},
year={2020},
eprint={2010.05338},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
|
[
"# Model Card for Model ID\n\n\nFine-tuned BERT trained to identify political party from speech in the UK Parliament. Trained with Labour/Conservative speeches using the hansard dataset 2000-2020",
"## Model Details\n\n- License: MIT\n- Finetuned from model : BERT",
"### Model Sources\n\n\n\n- Repository: Available on Github",
"## Uses\n\nModel can be used to predict whether an item of speech from the UK Parliament was said by a member of the Labour or Conservative Party.",
"### Out-of-Scope Use\n\nModel has only been exposed to UK data and speeches from Labour or Conservative. May not work as intended for other parties or geographies.",
"### Training Data\n\nbiglam/hansard_speech",
"#### Preprocessing\n\n[bert-cased]",
"## Evaluation\n\nEvaluated on held-out test data. Achieved test accuracy >80% and test loss <0.4 (binary cross-entropy).\n\nBibTeX:\n\n@misc{baly2020detect,\n title={We Can Detect Your Bias: Predicting the Political Ideology of News Articles}, \n author={Ramy Baly and Giovanni Da San Martino and James Glass and Preslav Nakov},\n year={2020},\n eprint={2010.05338},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}"
] |
[
"TAGS\n#transformers #safetensors #bert #text-classification #politics #parliamnet #sentiment-analysis #arxiv-2010.05338 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID\n\n\nFine-tuned BERT trained to identify political party from speech in the UK Parliament. Trained with Labour/Conservative speeches using the hansard dataset 2000-2020",
"## Model Details\n\n- License: MIT\n- Finetuned from model : BERT",
"### Model Sources\n\n\n\n- Repository: Available on Github",
"## Uses\n\nModel can be used to predict whether an item of speech from the UK Parliament was said by a member of the Labour or Conservative Party.",
"### Out-of-Scope Use\n\nModel has only been exposed to UK data and speeches from Labour or Conservative. May not work as intended for other parties or geographies.",
"### Training Data\n\nbiglam/hansard_speech",
"#### Preprocessing\n\n[bert-cased]",
"## Evaluation\n\nEvaluated on held-out test data. Achieved test accuracy >80% and test loss <0.4 (binary cross-entropy).\n\nBibTeX:\n\n@misc{baly2020detect,\n title={We Can Detect Your Bias: Predicting the Political Ideology of News Articles}, \n author={Ramy Baly and Giovanni Da San Martino and James Glass and Preslav Nakov},\n year={2020},\n eprint={2010.05338},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/ICBU-NPU/FashionGPT-70B-V1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/FashionGPT-70B-V1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
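In case a concrete example helps: concatenation is just a byte-level join of the parts in order. A minimal sketch, using the split Q6_K filenames from the table below (adjust to whatever you downloaded):

```python
# Sketch: join a split GGUF back into one file (byte concatenation, in order).
import shutil

parts = [
    "FashionGPT-70B-V1.i1-Q6_K.gguf.part1of2",
    "FashionGPT-70B-V1.i1-Q6_K.gguf.part2of2",
]
with open("FashionGPT-70B-V1.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streams each part; no full load in RAM
```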
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "llama2", "library_name": "transformers", "datasets": ["ehartford/samantha-data", "Open-Orca/OpenOrca", "jondurbin/airoboros-gpt4-1.4.1"], "base_model": "ICBU-NPU/FashionGPT-70B-V1", "quantized_by": "mradermacher"}
|
mradermacher/FashionGPT-70B-V1-i1-GGUF
| null |
[
"transformers",
"gguf",
"en",
"dataset:ehartford/samantha-data",
"dataset:Open-Orca/OpenOrca",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"base_model:ICBU-NPU/FashionGPT-70B-V1",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T10:49:37+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #en #dataset-ehartford/samantha-data #dataset-Open-Orca/OpenOrca #dataset-jondurbin/airoboros-gpt4-1.4.1 #base_model-ICBU-NPU/FashionGPT-70B-V1 #license-llama2 #endpoints_compatible #region-us
|
About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #en #dataset-ehartford/samantha-data #dataset-Open-Orca/OpenOrca #dataset-jondurbin/airoboros-gpt4-1.4.1 #base_model-ICBU-NPU/FashionGPT-70B-V1 #license-llama2 #endpoints_compatible #region-us \n"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Fredithefish/FishxInstruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/FishxInstruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/FishxInstruct-i1-GGUF/resolve/main/FishxInstruct.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "library_name": "transformers", "base_model": "Fredithefish/FishxInstruct", "quantized_by": "mradermacher"}
|
mradermacher/FishxInstruct-i1-GGUF
| null |
[
"transformers",
"gguf",
"en",
"base_model:Fredithefish/FishxInstruct",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T10:49:41+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #en #base_model-Fredithefish/FishxInstruct #endpoints_compatible #region-us
|
About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #en #base_model-Fredithefish/FishxInstruct #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imdb_classification_on_5M_full_pretrained_best_epoch_f1
This model is a fine-tuned version of [BigTMiami/amazon_pretraining_5M_model_corrected](https://huggingface.co/BigTMiami/amazon_pretraining_5M_model_corrected) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2685
- Accuracy: 0.9533
- F1 Macro: 0.9533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mapped to a `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
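
For reference, these settings map onto `transformers.TrainingArguments` roughly as follows. This is a sketch, not the exact training script: the output directory is a placeholder, and other options (evaluation strategy, mixed precision, etc.) are not recorded above.

```python
# Hedged reconstruction of the training setup from the hyperparameters above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="imdb_classification_output",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    num_train_epochs=10,
)
```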
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.2288 | 1.0 | 1250 | 0.2010 | 0.9454 | 0.9454 |
| 0.146 | 2.0 | 2500 | 0.1817 | 0.9512 | 0.9512 |
| 0.112 | 3.0 | 3750 | 0.2431 | 0.951 | 0.9510 |
| 0.0511 | 4.0 | 5000 | 0.2766 | 0.9518 | 0.9518 |
| 0.0456 | 5.0 | 6250 | 0.2608 | 0.9506 | 0.9506 |
| 0.0161 | 6.0 | 7500 | 0.4087 | 0.9498 | 0.9498 |
| 0.0073 | 7.0 | 8750 | 0.4011 | 0.9502 | 0.9502 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "BigTMiami/amazon_pretraining_5M_model_corrected", "model-index": [{"name": "imdb_classification_on_5M_full_pretrained_best_epoch_f1", "results": []}]}
|
ltuzova/imdb_classification_on_5M_full_pretrained_best_epoch_f1
| null |
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:BigTMiami/amazon_pretraining_5M_model_corrected",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T10:51:17+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-BigTMiami/amazon_pretraining_5M_model_corrected #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
imdb\_classification\_on\_5M\_full\_pretrained\_best\_epoch\_f1
===============================================================
This model is a fine-tuned version of BigTMiami/amazon\_pretraining\_5M\_model\_corrected on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2685
* Accuracy: 0.9533
* F1 Macro: 0.9533
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.06
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-BigTMiami/amazon_pretraining_5M_model_corrected #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
mlx
|
# GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.5-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.5`](https://huggingface.co/GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.5).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.5) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.5-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
{"license": "apache-2.0", "tags": ["mlx"]}
|
GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.5-mlx
| null |
[
"mlx",
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null |
2024-04-13T10:52:30+00:00
|
[] |
[] |
TAGS
#mlx #safetensors #qwen2 #license-apache-2.0 #region-us
|
# GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.5-mlx
This quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.5']().
Refer to the original model card for more details on the model.
## Use with mlx
|
[
"# GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.5-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.5']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
[
"TAGS\n#mlx #safetensors #qwen2 #license-apache-2.0 #region-us \n",
"# GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.5-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-2.5']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null | null |
# DavidAU/Confinus-2x7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`NeuralNovel/Confinus-2x7B`](https://huggingface.co/NeuralNovel/Confinus-2x7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NeuralNovel/Confinus-2x7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Confinus-2x7B-Q4_K_M-GGUF --model confinus-2x7b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Confinus-2x7B-Q4_K_M-GGUF --model confinus-2x7b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m confinus-2x7b.Q4_K_M.gguf -n 128
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["moe", "merge", "llama-cpp", "gguf-my-repo"], "model-index": [{"name": "Confinus-2x7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 73.89, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 88.82, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 65.12, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 71.88}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 84.77, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 68.84, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B", "name": "Open LLM Leaderboard"}}]}]}
|
DavidAU/Confinus-2x7B-Q4_K_M-GGUF
| null |
[
"gguf",
"moe",
"merge",
"llama-cpp",
"gguf-my-repo",
"en",
"license:apache-2.0",
"model-index",
"region:us"
] | null |
2024-04-13T10:53:51+00:00
|
[] |
[
"en"
] |
TAGS
#gguf #moe #merge #llama-cpp #gguf-my-repo #en #license-apache-2.0 #model-index #region-us
|
# DavidAU/Confinus-2x7B-Q4_K_M-GGUF
This model was converted to GGUF format from 'NeuralNovel/Confinus-2x7B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# DavidAU/Confinus-2x7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Confinus-2x7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #moe #merge #llama-cpp #gguf-my-repo #en #license-apache-2.0 #model-index #region-us \n",
"# DavidAU/Confinus-2x7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Confinus-2x7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation
|
transformers
|
# What is it?
A MoE model for roleplaying. Since a 7B model is small enough, we can combine several of them into a bigger model (which CAN be smarter).
It adapts (some limited) TSF (Trans Sexual Fiction) content because I have included my pre-trained model in the merge.
Better than V2 BTW.
# GGUF Version?
[Here](https://huggingface.co/Alsebay/NaruMOE-3x7B-v1-GGUF/)
# Recipe?
See the base model section.
# Why 3x7B?
I tested that a 16GB VRAM card can fit a < 20B model in GGUF format with 4-8k context length. I don't want to make a model that I can't use.
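As a rough back-of-envelope check (my own sketch, not the author's numbers; the parameter count and bits-per-weight figures are assumptions):

```python
# Estimate the GGUF file size of a 3x7B MoE at Q4_K_M.
params = 18.5e9          # assumed total for a Mixtral-style 3x7B (experts share attention layers)
bits_per_weight = 4.5    # assumed average for Q4_K_M quantization
size_gb = params * bits_per_weight / 8 / 1e9
print(f"~{size_gb:.1f} GB")  # ~10.4 GB, leaving headroom for a 4-8k KV cache on a 16GB card
```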
|
{"license": "cc-by-nc-4.0", "tags": ["moe", "merge", "roleplay", "Roleplay"], "base_model": ["Alsebay/NarumashiRTS-V2", "SanjiWatsuki/Kunoichi-DPO-v2-7B", "Nitral-AI/KukulStanta-7B"]}
|
Alsebay/NaruMOE-v1-3x7B
| null |
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"roleplay",
"Roleplay",
"base_model:Alsebay/NarumashiRTS-V2",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:Nitral-AI/KukulStanta-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T10:54:01+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mixtral #text-generation #moe #merge #roleplay #Roleplay #base_model-Alsebay/NarumashiRTS-V2 #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-Nitral-AI/KukulStanta-7B #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# What is it?
A MoE model for roleplaying. Since a 7B model is small enough, we can combine several of them into a bigger model (which CAN be smarter).
It adapts (some limited) TSF (Trans Sexual Fiction) content because I have included my pre-trained model in the merge.
Better than V2 BTW.
# GGUF Version?
Here
# Recipe?
See the base model section.
# Why 3x7B?
I tested that a 16GB VRAM card can fit a < 20B model in GGUF format with 4-8k context length. I don't want to make a model that I can't use.
|
[
"# What is is?\n\nA MoE model for Roleplaying. Since 7B model is small enough, we can combine them to a bigger model (Which CAN be smarter).\n\nAdapte (some limited) TSF (Trans Sexual Fiction) content because I have include my pre-train model in.\n\nBetter than V2 BTW.",
"# GGUF Version?\nHere",
"# Recipe?\n\nYou could see base model section",
"# Why 3x7B?\n\nI test on 16GB VRAM card could fit < 20B model GGUF version with 4-8k context length. I don't want make a model that I can't use."
] |
[
"TAGS\n#transformers #safetensors #mixtral #text-generation #moe #merge #roleplay #Roleplay #base_model-Alsebay/NarumashiRTS-V2 #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-Nitral-AI/KukulStanta-7B #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# What is is?\n\nA MoE model for Roleplaying. Since 7B model is small enough, we can combine them to a bigger model (Which CAN be smarter).\n\nAdapte (some limited) TSF (Trans Sexual Fiction) content because I have include my pre-train model in.\n\nBetter than V2 BTW.",
"# GGUF Version?\nHere",
"# Recipe?\n\nYou could see base model section",
"# Why 3x7B?\n\nI test on 16GB VRAM card could fit < 20B model GGUF version with 4-8k context length. I don't want make a model that I can't use."
] |
reinforcement-learning
|
stable-baselines3
|
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption based on the usual huggingface_sb3 naming convention.
checkpoint = load_from_hub("koopatroopa787/ppo-BipedalWalker-v3", "ppo-BipedalWalker-v3.zip")
model = PPO.load(checkpoint)
```
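A minimal evaluation loop for the loaded agent (a sketch assuming a local `gymnasium` install; not part of the original card):

```python
import gymnasium as gym

env = gym.make("BipedalWalker-v3")
obs, _ = env.reset()
total_reward, done = 0.0, False
while not done:
    # Greedy action from the PPO policy loaded above.
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode reward: {total_reward:.2f}")
```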
|
{"library_name": "stable-baselines3", "tags": ["BipedalWalker-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "BipedalWalker-v3", "type": "BipedalWalker-v3"}, "metrics": [{"type": "mean_reward", "value": "-74.70 +/- 17.91", "name": "mean_reward", "verified": false}]}]}]}
|
koopatroopa787/ppo-BipedalWalker-v3
| null |
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-13T10:56:15+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #BipedalWalker-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing BipedalWalker-v3
This is a trained model of a PPO agent playing BipedalWalker-v3
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
|
[
"# PPO Agent playing BipedalWalker-v3\nThis is a trained model of a PPO agent playing BipedalWalker-v3\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
[
"TAGS\n#stable-baselines3 #BipedalWalker-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing BipedalWalker-v3\nThis is a trained model of a PPO agent playing BipedalWalker-v3\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-generation
|
transformers
|
# Model Card for Model ID
This model expands Google's Gemma-7B to 14B parameters: the number of attention heads has been doubled and the number of hidden layers has been increased to 42.
# Chat template
**system:** system message...
**B:** user message...
**A:** assistant message...
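A minimal prompt-builder sketch for this format (my illustration; the single-newline turn separator is an assumption):

```python
def build_prompt(system: str, turns: list[tuple[str, str | None]]) -> str:
    """Assemble a prompt in the documented system/B/A chat format."""
    lines = [f"system: {system}"]
    for user, assistant in turns:
        lines.append(f"B: {user}")
        if assistant is not None:
            lines.append(f"A: {assistant}")
    lines.append("A:")  # cue the model to produce the next assistant turn
    return "\n".join(lines)

print(build_prompt("You are a helpful assistant.", [("안녕하세요?", None)]))
```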
|
{"language": ["ko"], "license": "apache-2.0", "library_name": "transformers"}
|
lcw99/google-gemma-14B-ko-chang
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T10:57:35+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #safetensors #gemma #text-generation #conversational #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
This model expands Google's Gemma-7B to 14B parameters: the number of attention heads has been doubled and the number of hidden layers has been increased to 42.
# Chat template
system: system message...
B: user message...
A: assistant message...
|
[
"# Model Card for Model ID\n\nThis is a model of the google Gemma-7B model with the parameter size increased to 14B. The attention head has been doubled and the number of hidden layers has been increased to 42.",
"# Chat template\n\nsystem: system message... \nB: user message... \nA: assistant message..."
] |
[
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID\n\nThis is a model of the google Gemma-7B model with the parameter size increased to 14B. The attention head has been doubled and the number of hidden layers has been increased to 42.",
"# Chat template\n\nsystem: system message... \nB: user message... \nA: assistant message..."
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Eliorkalfon/code-math-7B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
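For example, a single quant from this repo can be fetched and run like this (a sketch, assuming a local llama.cpp build; not part of the original card):

```bash
huggingface-cli download mradermacher/code-math-7B-slerp-GGUF \
  code-math-7B-slerp.Q4_K_M.gguf --local-dir .
./main -m code-math-7B-slerp.Q4_K_M.gguf -p "Write a function that adds two numbers."
```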
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/code-math-7B-slerp-GGUF/resolve/main/code-math-7B-slerp.Q2_K.gguf) | Q2_K | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/code-math-7B-slerp-GGUF/resolve/main/code-math-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/code-math-7B-slerp-GGUF/resolve/main/code-math-7B-slerp.IQ3_S.gguf) | IQ3_S | 2.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/code-math-7B-slerp-GGUF/resolve/main/code-math-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/code-math-7B-slerp-GGUF/resolve/main/code-math-7B-slerp.IQ3_M.gguf) | IQ3_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/code-math-7B-slerp-GGUF/resolve/main/code-math-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 2.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/code-math-7B-slerp-GGUF/resolve/main/code-math-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/code-math-7B-slerp-GGUF/resolve/main/code-math-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/code-math-7B-slerp-GGUF/resolve/main/code-math-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 3.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/code-math-7B-slerp-GGUF/resolve/main/code-math-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 3.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/code-math-7B-slerp-GGUF/resolve/main/code-math-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/code-math-7B-slerp-GGUF/resolve/main/code-math-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/code-math-7B-slerp-GGUF/resolve/main/code-math-7B-slerp.Q6_K.gguf) | Q6_K | 4.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/code-math-7B-slerp-GGUF/resolve/main/code-math-7B-slerp.Q8_0.gguf) | Q8_0 | 5.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "library_name": "transformers", "base_model": "Eliorkalfon/code-math-7B-slerp", "quantized_by": "mradermacher"}
|
mradermacher/code-math-7B-slerp-GGUF
| null |
[
"transformers",
"gguf",
"en",
"base_model:Eliorkalfon/code-math-7B-slerp",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:01:35+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #en #base_model-Eliorkalfon/code-math-7B-slerp #endpoints_compatible #region-us
|
About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #en #base_model-Eliorkalfon/code-math-7B-slerp #endpoints_compatible #region-us \n"
] |
null | null |
# DavidAU/DarkForest-20B-v2.0-Q5_K_M-GGUF
This model was converted to GGUF format from [`TeeZee/DarkForest-20B-v2.0`](https://huggingface.co/TeeZee/DarkForest-20B-v2.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TeeZee/DarkForest-20B-v2.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/DarkForest-20B-v2.0-Q5_K_M-GGUF --model darkforest-20b-v2.0.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/DarkForest-20B-v2.0-Q5_K_M-GGUF --model darkforest-20b-v2.0.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m darkforest-20b-v2.0.Q5_K_M.gguf -n 128
```
|
{"license": "other", "tags": ["merge", "not-for-all-audiences", "llama-cpp", "gguf-my-repo"], "license_name": "microsoft-research-license", "model-index": [{"name": "DarkForest-20B-v2.0", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 63.74, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkForest-20B-v2.0", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 86.32, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkForest-20B-v2.0", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 59.79, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkForest-20B-v2.0", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 56.14}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkForest-20B-v2.0", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 77.9, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkForest-20B-v2.0", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 23.28, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkForest-20B-v2.0", "name": "Open LLM Leaderboard"}}]}]}
|
DavidAU/DarkForest-20B-v2.0-Q5_K_M-GGUF
| null |
[
"gguf",
"merge",
"not-for-all-audiences",
"llama-cpp",
"gguf-my-repo",
"license:other",
"model-index",
"region:us"
] | null |
2024-04-13T11:02:15+00:00
|
[] |
[] |
TAGS
#gguf #merge #not-for-all-audiences #llama-cpp #gguf-my-repo #license-other #model-index #region-us
|
# DavidAU/DarkForest-20B-v2.0-Q5_K_M-GGUF
This model was converted to GGUF format from 'TeeZee/DarkForest-20B-v2.0' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# DavidAU/DarkForest-20B-v2.0-Q5_K_M-GGUF\nThis model was converted to GGUF format from 'TeeZee/DarkForest-20B-v2.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #merge #not-for-all-audiences #llama-cpp #gguf-my-repo #license-other #model-index #region-us \n",
"# DavidAU/DarkForest-20B-v2.0-Q5_K_M-GGUF\nThis model was converted to GGUF format from 'TeeZee/DarkForest-20B-v2.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
abhayesian/BobzillaV14
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:03:14+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
tomaszki/mistral-31-a
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T11:03:34+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
mlx
|
# GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-3.0-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-3.0`]().
Refer to the [original model card](https://huggingface.co/GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-3.0) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-3.0-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
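Since this is a chat model, applying the chat template before generation should give better results; a sketch, assuming the tokenizer returned by `load` exposes the standard Hugging Face `apply_chat_template`:

```python
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```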
|
{"license": "apache-2.0", "tags": ["mlx"]}
|
GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-3.0-mlx
| null |
[
"mlx",
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null |
2024-04-13T11:05:12+00:00
|
[] |
[] |
TAGS
#mlx #safetensors #qwen2 #license-apache-2.0 #region-us
|
# GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-3.0-mlx
This quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-3.0']().
Refer to the original model card for more details on the model.
## Use with mlx
|
[
"# GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
[
"TAGS\n#mlx #safetensors #qwen2 #license-apache-2.0 #region-us \n",
"# GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-4B-Chat-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null | null |
ToolsBaer OLM to MBOX Conversion is a great application for converting OLM data to MBOX format. It converts even large OLM files to the MBOX format easily and safely, with 100% accuracy. It preserves everything an OLM file can store about an email, including signatures, hyperlinks, dates, times, and the From, Cc, Bcc, and Subject fields. The application offers a trial version that converts up to 10 emails per folder for a simple test transfer. It runs on the most recent and older versions of Windows, such as Windows 11, 10, 8.1, 8, 7, and Vista. Users can use this advanced program for both business and personal purposes, and can download and use it now.
Read More:- http://www.toolsbaer.com/olm-to-mbox-conversion/
|
{}
|
madelineoliver/ToolsBaer-OLM-to-MBOX-Conversion
| null |
[
"region:us"
] | null |
2024-04-13T11:07:24+00:00
|
[] |
[] |
TAGS
#region-us
|
ToolsBaer OLM to MBOX Conversion is a great application for converting OLM data to MBOX format. It converts even large OLM files to the MBOX format easily and safely, with 100% accuracy. It preserves everything an OLM file can store about an email, including signatures, hyperlinks, dates, times, and the From, Cc, Bcc, and Subject fields. The application offers a trial version that converts up to 10 emails per folder for a simple test transfer. It runs on the most recent and older versions of Windows, such as Windows 11, 10, 8.1, 8, 7, and Vista. Users can use this advanced program for both business and personal purposes, and can download and use it now.
Read More:- URL
|
[] |
[
"TAGS\n#region-us \n"
] |
image-classification
|
transformers
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.2612117528915405
f1_macro: 0.5509939923795275
f1_micro: 0.9278878255063432
f1_weighted: 0.915533252030031
precision_macro: 0.7507175360173887
precision_micro: 0.9278878255063432
precision_weighted: 0.9184859185112592
recall_macro: 0.4872137731200702
recall_micro: 0.9278878255063432
recall_weighted: 0.9278878255063432
accuracy: 0.9278878255063432
|
{"tags": ["autotrain", "image-classification"], "datasets": ["xblock-social-screenshots-5/autotrain-data"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
|
howdyaendra/xblock-social-screenshots-5
| null |
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:xblock-social-screenshots-5/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:07:32+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #vit #image-classification #autotrain #dataset-xblock-social-screenshots-5/autotrain-data #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.2612117528915405
f1_macro: 0.5509939923795275
f1_micro: 0.9278878255063432
f1_weighted: 0.915533252030031
precision_macro: 0.7507175360173887
precision_micro: 0.9278878255063432
precision_weighted: 0.9184859185112592
recall_macro: 0.4872137731200702
recall_micro: 0.9278878255063432
recall_weighted: 0.9278878255063432
accuracy: 0.9278878255063432
|
[
"# Model Trained Using AutoTrain\n\n- Problem type: Image Classification",
"## Validation Metrics\nloss: 0.2612117528915405\n\nf1_macro: 0.5509939923795275\n\nf1_micro: 0.9278878255063432\n\nf1_weighted: 0.915533252030031\n\nprecision_macro: 0.7507175360173887\n\nprecision_micro: 0.9278878255063432\n\nprecision_weighted: 0.9184859185112592\n\nrecall_macro: 0.4872137731200702\n\nrecall_micro: 0.9278878255063432\n\nrecall_weighted: 0.9278878255063432\n\naccuracy: 0.9278878255063432"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #autotrain #dataset-xblock-social-screenshots-5/autotrain-data #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoTrain\n\n- Problem type: Image Classification",
"## Validation Metrics\nloss: 0.2612117528915405\n\nf1_macro: 0.5509939923795275\n\nf1_micro: 0.9278878255063432\n\nf1_weighted: 0.915533252030031\n\nprecision_macro: 0.7507175360173887\n\nprecision_micro: 0.9278878255063432\n\nprecision_weighted: 0.9184859185112592\n\nrecall_macro: 0.4872137731200702\n\nrecall_micro: 0.9278878255063432\n\nrecall_weighted: 0.9278878255063432\n\naccuracy: 0.9278878255063432"
] |
null |
mlx
|
# GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-3.0-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-3.0`]().
Refer to the [original model card](https://huggingface.co/GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-3.0) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-3.0-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
{"license": "apache-2.0", "tags": ["mlx"]}
|
GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-3.0-mlx
| null |
[
"mlx",
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null |
2024-04-13T11:07:39+00:00
|
[] |
[] |
TAGS
#mlx #safetensors #qwen2 #license-apache-2.0 #region-us
|
# GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-3.0-mlx
This quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-3.0']().
Refer to the original model card for more details on the model.
## Use with mlx
|
[
"# GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
[
"TAGS\n#mlx #safetensors #qwen2 #license-apache-2.0 #region-us \n",
"# GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null |
mlx
|
# GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.5-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.5`]().
Refer to the [original model card](https://huggingface.co/GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.5) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.5-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
{"tags": ["mlx"]}
|
GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.5-mlx
| null |
[
"mlx",
"safetensors",
"qwen2",
"region:us"
] | null |
2024-04-13T11:10:03+00:00
|
[] |
[] |
TAGS
#mlx #safetensors #qwen2 #region-us
|
# GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.5-mlx
This quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.5']().
Refer to the original model card for more details on the model.
## Use with mlx
|
[
"# GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.5-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.5']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
[
"TAGS\n#mlx #safetensors #qwen2 #region-us \n",
"# GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.5-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.5']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null |
transformers
|
# Uploaded model
- **Developed by:** raosharjeel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"}
|
raosharjeel/tonymatrixmixtral2
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:10:04+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: raosharjeel
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL" width="200"/>
|
[
"# Uploaded model\n\n- Developed by: raosharjeel\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: raosharjeel\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Zardian/Cyber_assist3.2
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T11:10:38+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** ntvcie
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
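No usage snippet ships with this card; the sketch below assumes the merged 16-bit weights load as a plain 🤗 Transformers causal LM (only the repo id comes from this record):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ntvcie/Gemma2bVinhntV3_16bit"  # repo id from this record
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Hello, Gemma!", return_tensors="pt").to(model.device)  # hypothetical prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```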
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-2b"}
|
ntvcie/Gemma2bVinhntV3_16bit
| null |
[
"transformers",
"pytorch",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/gemma-2b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:13:34+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gemma #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/gemma-2b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: ntvcie
- License: apache-2.0
- Finetuned from model : unsloth/gemma-2b
This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: ntvcie\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #pytorch #gemma #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/gemma-2b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: ntvcie\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | null |
# DavidAU/DarkForest-20B-v1.0-Q5_K_M-GGUF
This model was converted to GGUF format from [`TeeZee/DarkForest-20B-v1.0`](https://huggingface.co/TeeZee/DarkForest-20B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TeeZee/DarkForest-20B-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/DarkForest-20B-v1.0-Q5_K_M-GGUF --model darkforest-20b-v1.0.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/DarkForest-20B-v1.0-Q5_K_M-GGUF --model darkforest-20b-v1.0.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m darkforest-20b-v1.0.Q5_K_M.gguf -n 128
```
|
{"license": "other", "tags": ["merge", "not-for-all-audiences", "llama-cpp", "gguf-my-repo"], "license_name": "microsoft-research-license"}
|
DavidAU/DarkForest-20B-v1.0-Q5_K_M-GGUF
| null |
[
"gguf",
"merge",
"not-for-all-audiences",
"llama-cpp",
"gguf-my-repo",
"license:other",
"region:us"
] | null |
2024-04-13T11:14:41+00:00
|
[] |
[] |
TAGS
#gguf #merge #not-for-all-audiences #llama-cpp #gguf-my-repo #license-other #region-us
|
# DavidAU/DarkForest-20B-v1.0-Q5_K_M-GGUF
This model was converted to GGUF format from 'TeeZee/DarkForest-20B-v1.0' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# DavidAU/DarkForest-20B-v1.0-Q5_K_M-GGUF\nThis model was converted to GGUF format from 'TeeZee/DarkForest-20B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #merge #not-for-all-audiences #llama-cpp #gguf-my-repo #license-other #region-us \n",
"# DavidAU/DarkForest-20B-v1.0-Q5_K_M-GGUF\nThis model was converted to GGUF format from 'TeeZee/DarkForest-20B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
audio-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-dialect-classifier-cross
This model is a fine-tuned version of [yaygomii/whisper-small-ta-fyp](https://huggingface.co/yaygomii/whisper-small-ta-fyp) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0056 | 1.0 | 548 | 0.0010 | 1.0 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
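For inference, a minimal sketch with the 🤗 Transformers pipeline is given below; the dialect label set depends on the (undocumented) training data, and the audio path is a placeholder:
```python
from transformers import pipeline

# Hypothetical inference sketch for this Whisper-based dialect classifier.
classifier = pipeline("audio-classification", model="yaygomii/whisper-small-dialect-classifier-cross")
print(classifier("sample.wav"))  # local audio file; decoding requires ffmpeg
```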
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "yaygomii/whisper-small-ta-fyp", "model-index": [{"name": "whisper-small-dialect-classifier-cross", "results": []}]}
|
yaygomii/whisper-small-dialect-classifier-cross
| null |
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"audio-classification",
"generated_from_trainer",
"base_model:yaygomii/whisper-small-ta-fyp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:15:30+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #whisper #audio-classification #generated_from_trainer #base_model-yaygomii/whisper-small-ta-fyp #license-apache-2.0 #endpoints_compatible #region-us
|
whisper-small-dialect-classifier-cross
======================================
This model is a fine-tuned version of yaygomii/whisper-small-ta-fyp on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0010
* Accuracy: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.0.dev0
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #whisper #audio-classification #generated_from_trainer #base_model-yaygomii/whisper-small-ta-fyp #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | null |
Merge the QLoRA adapter model with the corresponding Baichuan2-13B model before use.
Baichuan2-13B: https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
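A minimal merge sketch with 🤗 PEFT is given below; it assumes this repo holds a standard PEFT/QLoRA adapter for the chat base model linked above (the output path is a placeholder):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_id = "baichuan-inc/Baichuan2-13B-Chat"
adapter_id = "cyysky2/Baichuan2-13B_SFT_on_DISC-Law"  # this repo

# Baichuan2 ships custom modeling code, hence trust_remote_code=True.
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)
model = model.merge_and_unload()  # fold the LoRA weights into the base model
model.save_pretrained("Baichuan2-13B-DISC-Law-merged")  # placeholder output directory
```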
|
{"license": "apache-2.0"}
|
cyysky2/Baichuan2-13B_SFT_on_DISC-Law
| null |
[
"tensorboard",
"license:apache-2.0",
"region:us"
] | null |
2024-04-13T11:16:16+00:00
|
[] |
[] |
TAGS
#tensorboard #license-apache-2.0 #region-us
|
Merge the QLoRA adapter model with the corresponding Baichuan2-13B model before use.
Baichuan2-13B: URL
|
[] |
[
"TAGS\n#tensorboard #license-apache-2.0 #region-us \n"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multilingual-xlm-roberta-for-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1383
- F1: 0.8620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3267 | 1.0 | 525 | 0.1597 | 0.8245 |
| 0.1382 | 2.0 | 1050 | 0.1468 | 0.8414 |
| 0.0872 | 3.0 | 1575 | 0.1383 | 0.8620 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
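A minimal inference sketch with the 🤗 Transformers pipeline follows; the entity label set is not documented in this card, so the example input and output are only illustrative:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="bcokdilli/multilingual-xlm-roberta-for-ner",
    aggregation_strategy="simple",  # group sub-word pieces into whole entities
)
print(ner("Angela Merkel visited the Eiffel Tower in Paris."))
```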
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "multilingual-xlm-roberta-for-ner", "results": []}]}
|
bcokdilli/multilingual-xlm-roberta-for-ner
| null |
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2024-04-13T11:18:17+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
multilingual-xlm-roberta-for-ner
================================
This model is a fine-tuned version of xlm-roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1383
* F1: 0.8620
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 24
* eval\_batch\_size: 24
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | null |
# DavidAU/Buttocks-7B-v1.0-Q4_K_M-GGUF
This model was converted to GGUF format from [`TeeZee/Buttocks-7B-v1.0`](https://huggingface.co/TeeZee/Buttocks-7B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TeeZee/Buttocks-7B-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Buttocks-7B-v1.0-Q4_K_M-GGUF --model buttocks-7b-v1.0.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Buttocks-7B-v1.0-Q4_K_M-GGUF --model buttocks-7b-v1.0.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m buttocks-7b-v1.0.Q4_K_M.gguf -n 128
```
|
{"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "merge", "llama-cpp", "gguf-my-repo"], "model-index": [{"name": "Buttocks-7B-v1.0", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 54.61, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Buttocks-7B-v1.0", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 75.61, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Buttocks-7B-v1.0", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 50.22, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Buttocks-7B-v1.0", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 44.72}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Buttocks-7B-v1.0", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 68.9, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Buttocks-7B-v1.0", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 5.76, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Buttocks-7B-v1.0", "name": "Open LLM Leaderboard"}}]}]}
|
DavidAU/Buttocks-7B-v1.0-Q4_K_M-GGUF
| null |
[
"gguf",
"not-for-all-audiences",
"merge",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-nc-4.0",
"model-index",
"region:us"
] | null |
2024-04-13T11:19:32+00:00
|
[] |
[] |
TAGS
#gguf #not-for-all-audiences #merge #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #model-index #region-us
|
# DavidAU/Buttocks-7B-v1.0-Q4_K_M-GGUF
This model was converted to GGUF format from 'TeeZee/Buttocks-7B-v1.0' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# DavidAU/Buttocks-7B-v1.0-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'TeeZee/Buttocks-7B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #not-for-all-audiences #merge #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #model-index #region-us \n",
"# DavidAU/Buttocks-7B-v1.0-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'TeeZee/Buttocks-7B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
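Since the card leaves this blank, a minimal sketch is given below; per this record's tags the checkpoint is a Llama-architecture causal LM, and only the repo id is taken from the record:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tomaszki/mistral-31"  # repo id from this record
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)  # hypothetical prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```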
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
tomaszki/mistral-31
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T11:19:34+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Psoriasis-Project-Aug-M-vit-large-patch16-224-in21k
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0055
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4296 | 0.99 | 36 | 0.1389 | 1.0 |
| 0.1124 | 1.99 | 72 | 0.0602 | 0.9792 |
| 0.0317 | 2.98 | 108 | 0.0140 | 1.0 |
| 0.0091 | 4.0 | 145 | 0.0058 | 1.0 |
| 0.0058 | 4.97 | 180 | 0.0055 | 1.0 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
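For inference, a minimal sketch with the 🤗 Transformers pipeline follows; the class labels come from the undocumented fine-tuning dataset, and the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ahmedesmail16/Psoriasis-Project-Aug-M-vit-large-patch16-224-in21k",
)
print(classifier("skin_lesion.jpg"))  # local image path; labels depend on the training data
```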
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-large-patch16-224-in21k", "model-index": [{"name": "Psoriasis-Project-Aug-M-vit-large-patch16-224-in21k", "results": []}]}
|
ahmedesmail16/Psoriasis-Project-Aug-M-vit-large-patch16-224-in21k
| null |
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-large-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:19:50+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-large-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
Psoriasis-Project-Aug-M-vit-large-patch16-224-in21k
===================================================
This model is a fine-tuned version of google/vit-large-patch16-224-in21k on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0055
* Accuracy: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-large-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2146
- Accuracy: 0.9255
- F1: 0.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.793 | 1.0 | 250 | 0.3009 | 0.9125 | 0.9117 |
| 0.2409 | 2.0 | 500 | 0.2146 | 0.9255 | 0.9255 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
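A minimal inference sketch with the 🤗 Transformers pipeline follows; the `emotion` dataset uses six labels (sadness, joy, love, anger, fear, surprise), so predictions should map onto these:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Aqel/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you this weekend!"))  # expected: a high-scoring "joy"-type label
```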
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9255, "name": "Accuracy"}, {"type": "f1", "value": 0.925495041906661, "name": "F1"}]}]}]}
|
Aqel/distilbert-base-uncased-finetuned-emotion
| null |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:20:09+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2146
* Accuracy: 0.9255
* F1: 0.9255
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [unsloth/gemma-2b-bnb-4bit](https://huggingface.co/unsloth/gemma-2b-bnb-4bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 60
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
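To run the adapter, a minimal sketch is given below; it assumes this repo stores a standard PEFT adapter on top of the 4-bit base named above (loading the base requires `bitsandbytes`, and the prompt is a placeholder):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/gemma-2b-bnb-4bit"                     # 4-bit base from this card
adapter_id = "manasikhillare/python_qna_finetuned_gemma"  # this repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # needs bitsandbytes
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Q: How do I reverse a list in Python?\nA:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```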
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "unsloth", "generated_from_trainer"], "base_model": "unsloth/gemma-2b-bnb-4bit", "model-index": [{"name": "outputs", "results": []}]}
|
manasikhillare/python_qna_finetuned_gemma
| null |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null |
2024-04-13T11:20:41+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #trl #sft #unsloth #generated_from_trainer #base_model-unsloth/gemma-2b-bnb-4bit #license-apache-2.0 #region-us
|
# outputs
This model is a fine-tuned version of unsloth/gemma-2b-bnb-4bit on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 60
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# outputs\n\nThis model is a fine-tuned version of unsloth/gemma-2b-bnb-4bit on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 3407\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 5\n- training_steps: 60\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #trl #sft #unsloth #generated_from_trainer #base_model-unsloth/gemma-2b-bnb-4bit #license-apache-2.0 #region-us \n",
"# outputs\n\nThis model is a fine-tuned version of unsloth/gemma-2b-bnb-4bit on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 3407\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 5\n- training_steps: 60\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
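As the card leaves this blank, a minimal sketch follows; per this record's tags the checkpoint is a BERT (BETO) sequence classifier for Mexican-Spanish tweets, and the label names are not documented:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jhovany/Beto_Clasificar_Tweets_Mexicanos_Homomex2024")
print(classifier("Hoy marchamos con orgullo por la CDMX"))  # hypothetical tweet; label set is undocumented
```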
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
jhovany/Beto_Clasificar_Tweets_Mexicanos_Homomex2024
| null |
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:23:01+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
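The card leaves this section blank. Below is a minimal sketch, assuming the checkpoint is a public causal LM whose tokenizer ships a chat template; the repo id is taken from this card's listing, and nothing here is an official example:

```python
# Hedged sketch: assumes tomaszki/mistral-31-b loads with the standard
# transformers causal-LM API and has a chat template in its tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tomaszki/mistral-31-b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs accelerate

messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```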
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
tomaszki/mistral-31-b
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T11:25:43+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2178
- Accuracy: 0.9275
- F1: 0.9275
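For quick inference, a hedged sketch using the `text-classification` pipeline; the hub repo id and the label names are assumptions, since the card does not state them:

```python
# Minimal sketch: assumes the checkpoint is public under this repo id and that
# id2label was saved at fine-tuning time (otherwise labels show as LABEL_0..LABEL_5).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="stanoh/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am thrilled the training finally converged!"))
# e.g. [{'label': 'joy', 'score': 0.98}] -- actual labels depend on the saved config
```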
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
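For reference, a sketch of how these settings map onto `transformers.TrainingArguments`; argument names follow the standard Trainer API, and this is an illustration rather than the exact script used:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,               # learning_rate
    per_device_train_batch_size=64,   # train_batch_size
    per_device_eval_batch_size=64,    # eval_batch_size
    seed=42,                          # seed
    lr_scheduler_type="linear",       # lr_scheduler_type
    num_train_epochs=2,               # num_epochs
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 match the Trainer defaults
# (adam_beta1 / adam_beta2 / adam_epsilon), so they need no explicit setting.
```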
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8356 | 1.0 | 250 | 0.3134 | 0.9065 | 0.9058 |
| 0.2509 | 2.0 | 500 | 0.2178 | 0.9275 | 0.9275 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9275, "name": "Accuracy"}, {"type": "f1", "value": 0.9274544735671768, "name": "F1"}]}]}]}
|
stanoh/distilbert-base-uncased-finetuned-emotion
| null |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:30:09+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2178
* Accuracy: 0.9275
* F1: 0.9275
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
# use google/gemma-7b if you have access
base_model: unsloth/gemma-7b
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
# huggingface repo
datasets:
- path: OdiaGenAIdata/culturax-gemma-data
type: completion
val_set_size: 0.1
output_dir: ./gemma-odia-7b-pretrain-unsloth
hub_model_id: sam2ai/gemma_odia_7b_unsloth
adapter: qlora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
wandb_project: gemma-completion-7b-odia-unsloth
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 2
num_epochs: 10
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# gemma_odia_7b_unsloth
This model is a fine-tuned version of [unsloth/gemma-7b](https://huggingface.co/unsloth/gemma-7b) on the OdiaGenAIdata/culturax-gemma-data dataset (per the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 2.9914
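Because this repo stores QLoRA adapter checkpoints rather than merged weights, a hedged loading sketch; the base model comes from the config above, while the adapter repo id is assumed from this card's listing:

```python
# Minimal sketch: assumes the PEFT adapter is public at this repo id.
# load_in_4bit mirrors `load_in_4bit: true` in the axolotl config and
# requires bitsandbytes (newer transformers prefer quantization_config).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-7b", load_in_4bit=True, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-7b")
model = PeftModel.from_pretrained(base, "OdiaGenAI-LLM/odia-gemma-7b-base-checkpoints")

inputs = tokenizer("Odia is a language spoken in", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```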
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 32
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 39.5782 | 0.0 | 1 | 39.2579 |
| 7.2511 | 0.25 | 169 | 7.0771 |
| 4.2519 | 0.5 | 338 | 4.0654 |
| 3.7348 | 0.75 | 507 | 3.5937 |
| 3.4573 | 1.0 | 676 | 3.3126 |
| 3.4299 | 1.24 | 845 | 3.2429 |
| 3.4908 | 1.49 | 1014 | 3.2063 |
| 3.3588 | 1.74 | 1183 | 3.1614 |
| 3.3646 | 1.99 | 1352 | 3.1313 |
| 3.2672 | 2.23 | 1521 | 3.0885 |
| 3.2706 | 2.48 | 1690 | 3.0678 |
| 3.173 | 2.73 | 1859 | 3.0410 |
| 3.7319 | 2.98 | 2028 | 3.5392 |
| 3.3142 | 3.22 | 2197 | 3.1610 |
| 3.2931 | 3.47 | 2366 | 3.1339 |
| 3.3045 | 3.72 | 2535 | 3.0710 |
| 3.2423 | 3.97 | 2704 | 3.0920 |
| 3.2565 | 4.2 | 2873 | 3.0311 |
| 3.1167 | 4.45 | 3042 | 3.0039 |
| 3.1624 | 4.71 | 3211 | 3.0108 |
| 3.1697 | 4.96 | 3380 | 3.1008 |
| 3.1434 | 5.19 | 3549 | 2.9915 |
| 3.2301 | 5.44 | 3718 | 3.0033 |
| 3.1686 | 5.69 | 3887 | 2.9893 |
| 3.9959 | 5.95 | 4056 | 3.7561 |
| 3.3066 | 6.18 | 4225 | 3.1076 |
| 3.2567 | 6.43 | 4394 | 3.0679 |
| 3.1764 | 6.68 | 4563 | 3.0459 |
| 3.1848 | 6.93 | 4732 | 3.0342 |
| 3.181 | 7.17 | 4901 | 3.0279 |
| 3.1688 | 7.42 | 5070 | 3.0203 |
| 3.1474 | 7.67 | 5239 | 3.0131 |
| 3.1672 | 7.92 | 5408 | 3.0080 |
| 3.1202 | 8.16 | 5577 | 3.0036 |
| 3.1368 | 8.41 | 5746 | 2.9999 |
| 3.1104 | 8.66 | 5915 | 2.9968 |
| 3.1236 | 8.91 | 6084 | 2.9939 |
| 3.1055 | 9.15 | 6253 | 2.9924 |
| 3.1563 | 9.4 | 6422 | 2.9918 |
| 3.1373 | 9.65 | 6591 | 2.9914 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0.dev0
- Pytorch 2.4.0.dev20240326+rocm6.0
- Datasets 2.18.0
- Tokenizers 0.15.0
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["axolotl", "generated_from_trainer"], "base_model": "unsloth/gemma-7b", "model-index": [{"name": "gemma_odia_7b_unsloth", "results": []}]}
|
OdiaGenAI-LLM/odia-gemma-7b-base-checkpoints
| null |
[
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-7b",
"license:apache-2.0",
"4-bit",
"region:us"
] | null |
2024-04-13T11:34:54+00:00
|
[] |
[] |
TAGS
#peft #safetensors #gemma #axolotl #generated_from_trainer #base_model-unsloth/gemma-7b #license-apache-2.0 #4-bit #region-us
|
<img src="URL" alt="Built with Axolotl" width="200" height="32"/>
See axolotl config
axolotl version: '0.4.0'
gemma\_odia\_7b\_unsloth
========================
This model is a fine-tuned version of unsloth/gemma-7b on the OdiaGenAIdata/culturax-gemma-data dataset.
It achieves the following results on the evaluation set:
* Loss: 2.9914
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 8
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 128
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 32
* num\_epochs: 10
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.40.0.dev0
* Pytorch 2.4.0.dev20240326+rocm6.0
* Datasets 2.18.0
* Tokenizers 0.15.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 32\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.4.0.dev20240326+rocm6.0\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] |
[
"TAGS\n#peft #safetensors #gemma #axolotl #generated_from_trainer #base_model-unsloth/gemma-7b #license-apache-2.0 #4-bit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 32\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.4.0.dev20240326+rocm6.0\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased_classification_finetuned_dcard_adptive
This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2044
- F1: 0.9611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
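The cosine schedule with 500 warmup steps corresponds to the standard transformers helper; here is a sketch of recreating it, where the optimizer parameters and the total step count are placeholders inferred from the results table:

```python
# Illustration only: rebuilds the LR schedule from the hyperparameters above.
import torch
from transformers import get_cosine_schedule_with_warmup

params = torch.nn.Linear(768, 2).parameters()  # placeholder for the real classifier's parameters
optimizer = torch.optim.AdamW(params, lr=3e-5, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=500,          # lr_scheduler_warmup_steps
    num_training_steps=984 * 10,   # ~984 steps/epoch (from the results table) x 10 epochs
)
```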
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3046 | 1.0 | 984 | 0.2773 | 0.8849 |
| 0.1913 | 2.0 | 1968 | 0.2150 | 0.9344 |
| 0.1421 | 3.0 | 2952 | 0.1946 | 0.9449 |
| 0.0938 | 4.0 | 3936 | 0.2217 | 0.9520 |
| 0.0691 | 5.0 | 4920 | 0.2221 | 0.9572 |
| 0.053 | 6.0 | 5904 | 0.2044 | 0.9611 |
| 0.0325 | 7.0 | 6888 | 0.2645 | 0.9546 |
| 0.042 | 8.0 | 7872 | 0.2454 | 0.9606 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "distilbert/distilbert-base-multilingual-cased", "model-index": [{"name": "distilbert-base-multilingual-cased_classification_finetuned_dcard_adptive", "results": []}]}
|
Mou11209203/distilbert-base-multilingual-cased_classification_finetuned_dcard_adptive
| null |
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:36:25+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-multilingual-cased\_classification\_finetuned\_dcard\_adptive
=============================================================================
This model is a fine-tuned version of distilbert/distilbert-base-multilingual-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2044
* F1: 0.9611
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
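The section is blank; given the repo name (PROMPT_TUNING_CAUSAL_LM on TinyLlama-1.1B-Chat-v1.0), a hedged sketch of loading it as a PEFT prompt-tuning adapter follows. The base model and the example prompt are inferences from the repo id, not documented facts:

```python
# Minimal sketch: assumes this repo holds a PEFT prompt-tuning adapter
# trained on top of TinyLlama/TinyLlama-1.1B-Chat-v1.0 (inferred from the name).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "baris-yazici/binary_liar_TinyLlama_TinyLlama-1.1B-Chat-v1.0_PROMPT_TUNING_CAUSAL_LM"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)

inputs = tokenizer("Statement: the sky is green. True or false?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0], skip_special_tokens=True))
```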
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
baris-yazici/binary_liar_TinyLlama_TinyLlama-1.1B-Chat-v1.0_PROMPT_TUNING_CAUSAL_LM
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:37:35+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
HenryCai1129/lora-test
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:39:16+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
mlx
|
# GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-3.0-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-3.0`](https://huggingface.co/GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-3.0).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-3.0) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-3.0-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
{"license": "apache-2.0", "tags": ["mlx"]}
|
GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-3.0-mlx
| null |
[
"mlx",
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null |
2024-04-13T11:39:22+00:00
|
[] |
[] |
TAGS
#mlx #safetensors #qwen2 #license-apache-2.0 #region-us
|
# GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-3.0-mlx
This quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-3.0']().
Refer to the original model card for more details on the model.
## Use with mlx
|
[
"# GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
[
"TAGS\n#mlx #safetensors #qwen2 #license-apache-2.0 #region-us \n",
"# GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
text-generation
|
transformers
|
# Inex12Mergerix-7B
Inex12Mergerix-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [MSL7/INEX12-7b](https://huggingface.co/MSL7/INEX12-7b)
* [MiniMoog/Mergerix-7b-v0.3](https://huggingface.co/MiniMoog/Mergerix-7b-v0.3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: MSL7/INEX12-7b
layer_range: [0, 32]
- model: MiniMoog/Mergerix-7b-v0.3
layer_range: [0, 32]
merge_method: slerp
base_model: MSL7/INEX12-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Inex12Mergerix-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"], "base_model": ["MSL7/INEX12-7b", "MiniMoog/Mergerix-7b-v0.3"]}
|
automerger/Inex12Mergerix-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:MSL7/INEX12-7b",
"base_model:MiniMoog/Mergerix-7b-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T11:43:58+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #base_model-MSL7/INEX12-7b #base_model-MiniMoog/Mergerix-7b-v0.3 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Inex12Mergerix-7B
Inex12Mergerix-7B is an automated merge created by Maxime Labonne using the following configuration.
* MSL7/INEX12-7b
* MiniMoog/Mergerix-7b-v0.3
## Configuration
## Usage
|
[
"# Inex12Mergerix-7B\n\nInex12Mergerix-7B is an automated merge created by Maxime Labonne using the following configuration.\n* MSL7/INEX12-7b\n* MiniMoog/Mergerix-7b-v0.3",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #base_model-MSL7/INEX12-7b #base_model-MiniMoog/Mergerix-7b-v0.3 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Inex12Mergerix-7B\n\nInex12Mergerix-7B is an automated merge created by Maxime Labonne using the following configuration.\n* MSL7/INEX12-7b\n* MiniMoog/Mergerix-7b-v0.3",
"## Configuration",
"## Usage"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
dbaek111/llama2-lora-finetunined-merged_test
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-13T11:44:08+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
mlx
|
# GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2`]().
Refer to the [original model card](https://huggingface.co/GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
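Since this is a chat model, it usually helps to wrap the input in Qwen's chat template first. A hedged sketch, assuming the tokenizer returned by `load` exposes the underlying Hugging Face `apply_chat_template`:

```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2-mlx")
# Build a chat-formatted prompt (assumes HF tokenizer methods are passed through).
messages = [{"role": "user", "content": "What is low-bit quantization?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```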
|
{"license": "apache-2.0", "tags": ["mlx"]}
|
GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2-mlx
| null |
[
"mlx",
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null |
2024-04-13T11:44:23+00:00
|
[] |
[] |
TAGS
#mlx #safetensors #qwen2 #license-apache-2.0 #region-us
|
# GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2-mlx
This quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2']().
Refer to the original model card for more details on the model.
## Use with mlx
|
[
"# GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
[
"TAGS\n#mlx #safetensors #qwen2 #license-apache-2.0 #region-us \n",
"# GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null |
mlx
|
# GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.5-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.5`]().
Refer to the [original model card](https://huggingface.co/GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.5) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.5-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
{"license": "apache-2.0", "tags": ["mlx"]}
|
GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.5-mlx
| null |
[
"mlx",
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null |
2024-04-13T11:45:14+00:00
|
[] |
[] |
TAGS
#mlx #safetensors #qwen2 #license-apache-2.0 #region-us
|
# GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.5-mlx
This quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.5']().
Refer to the original model card for more details on the model.
## Use with mlx
|
[
"# GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.5-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.5']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
[
"TAGS\n#mlx #safetensors #qwen2 #license-apache-2.0 #region-us \n",
"# GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.5-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.5']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
OsakanaTeishoku/mixtral_small_dummy
| null |
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T11:46:04+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mixtral #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mixtral #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image
|
diffusers
|
# LuMiNAShaper
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/synthetica/luminashaper/tree/main) them in the Files & versions tab.
|
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "A photo of a sunny desert beach", "parameters": {"negative_prompt": "NSFW"}, "output": {"url": "images/A photo of a sunny desert beach.jpg"}}], "base_model": "dreamlike-art/dreamlike-photoreal-2.0"}
|
synthetica/luminashaper
| null |
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:dreamlike-art/dreamlike-photoreal-2.0",
"has_space",
"region:us"
] | null |
2024-04-13T11:50:19+00:00
|
[] |
[] |
TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-dreamlike-art/dreamlike-photoreal-2.0 #has_space #region-us
|
# LuMiNAShaper
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
|
[
"# LuMiNAShaper\n\n<Gallery />",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
[
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-dreamlike-art/dreamlike-photoreal-2.0 #has_space #region-us \n",
"# LuMiNAShaper\n\n<Gallery />",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
text-to-image
|
diffusers
|
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - philipp-zettl/ssd-butters-lora
<Gallery />
## Model description
These are philipp-zettl/ssd-butters-lora LoRA adaption weights for segmind/SSD-1B.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use BUTTCC to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/philipp-zettl/ssd-butters-lora/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
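Until the snippet above is filled in, here is a minimal, unofficial sketch using the standard diffusers SDXL LoRA API; the model IDs come from this card, while the dtype and step count are illustrative choices:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SSD-1B base pipeline, then attach these LoRA adaption weights.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "segmind/SSD-1B", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("philipp-zettl/ssd-butters-lora")

# Prompt with the trigger word from this card.
image = pipe("a photo of BUTTCC", num_inference_steps=25).images[0]
image.save("butters.png")
```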
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "segmind/SSD-1B", "instance_prompt": "BUTTCC", "widget": []}
|
philipp-zettl/ssd-butters-lora
| null |
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:segmind/SSD-1B",
"license:openrail++",
"region:us"
] | null |
2024-04-13T11:51:34+00:00
|
[] |
[] |
TAGS
#diffusers #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-segmind/SSD-1B #license-openrail++ #region-us
|
# SDXL LoRA DreamBooth - philipp-zettl/ssd-butters-lora
<Gallery />
## Model description
These are philipp-zettl/ssd-butters-lora LoRA adaption weights for segmind/SSD-1B.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use BUTTCC to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
[
"# SDXL LoRA DreamBooth - philipp-zettl/ssd-butters-lora\n\n<Gallery />",
"## Model description\n\nThese are philipp-zettl/ssd-butters-lora LoRA adaption weights for segmind/SSD-1B.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.",
"## Trigger words\n\nYou should use BUTTCC to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
[
"TAGS\n#diffusers #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-segmind/SSD-1B #license-openrail++ #region-us \n",
"# SDXL LoRA DreamBooth - philipp-zettl/ssd-butters-lora\n\n<Gallery />",
"## Model description\n\nThese are philipp-zettl/ssd-butters-lora LoRA adaption weights for segmind/SSD-1B.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.",
"## Trigger words\n\nYou should use BUTTCC to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# chaikit/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4037
- Validation Loss: 1.4256
- Train Accuracy: 0.884
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
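For readability, the serialized optimizer above corresponds roughly to the following construction (a sketch, not the exact training script):

```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Linear (power=1.0) decay from 3e-5 to 0 over 20,000 steps, as configured above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-5, decay_steps=20_000, end_learning_rate=0.0, power=1.0
)
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    weight_decay_rate=0.01,
    beta_1=0.9, beta_2=0.999, epsilon=1e-8,
)
```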
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.4037 | 1.4256 | 0.884 | 0 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "chaikit/food_classifier", "results": []}]}
|
chaikit/food_classifier
| null |
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:53:12+00:00
|
[] |
[] |
TAGS
#transformers #tf #vit #image-classification #generated_from_keras_callback #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
chaikit/food\_classifier
========================
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 2.4037
* Validation Loss: 1.4256
* Train Accuracy: 0.884
* Epoch: 0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 3e-05, 'decay\_steps': 20000, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.01}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.38.2
* TensorFlow 2.15.0
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 3e-05, 'decay\\_steps': 20000, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tf #vit #image-classification #generated_from_keras_callback #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 3e-05, 'decay\\_steps': 20000, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased_classification_finetuned_mobile01_all_adptive
This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2361
- F1: 0.9353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
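These settings map onto `transformers.TrainingArguments` roughly as follows (a sketch; `output_dir` is a placeholder, not from the original run):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    num_train_epochs=10,
)
```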
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2946 | 1.0 | 909 | 0.2703 | 0.8938 |
| 0.2167 | 2.0 | 1818 | 0.2586 | 0.9249 |
| 0.1828 | 3.0 | 2727 | 0.2122 | 0.9308 |
| 0.1524 | 4.0 | 3636 | 0.2468 | 0.9305 |
| 0.1456 | 5.0 | 4545 | 0.2361 | 0.9353 |
| 0.1239 | 6.0 | 5454 | 0.2969 | 0.9336 |
| 0.0749 | 7.0 | 6363 | 0.3235 | 0.9326 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "distilbert/distilbert-base-multilingual-cased", "model-index": [{"name": "distilbert-base-multilingual-cased_classification_finetuned_mobile01_all_adptive", "results": []}]}
|
Mou11209203/distilbert-base-multilingual-cased_classification_finetuned_mobile01_all_adptive
| null |
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:54:09+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-multilingual-cased\_classification\_finetuned\_mobile01\_all\_adptive
=====================================================================================
This model is a fine-tuned version of distilbert/distilbert-base-multilingual-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2361
* F1: 0.9353
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | null |
What are Arteris Plus capsules?
The Arteris Plus capsule is a revolutionary dietary supplement designed to support heart and vascular health. It is expertly formulated with a blend of natural ingredients known for their beneficial effects on heart function and blood pressure regulation. This supplement is designed to complement a healthy lifestyle and may offer support in maintaining cardiovascular well-being Arteris Plus Apoteket.
Official website:<a href="https://www.nutritionsee.com/Arterluke">www.ArterisPlus.coms</a>
<p><a href="https://www.nutritionsee.com/Arterluke"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/04/Arteris-Plus.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/Arterluke">Buy Now!! Click the link below for more details and get a 50% discount now... Hurry</a>
Official website:<a href="https://www.nutritionsee.com/Arterluke">www.ArterisPlus.coms</a>
|
{"license": "apache-2.0"}
|
ArterisPlus/ArterisPlus
| null |
[
"license:apache-2.0",
"region:us"
] | null |
2024-04-13T11:54:24+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
What are Arteris Plus capsules?
The Arteris Plus capsule is a revolutionary dietary supplement designed to support heart and vascular health. It is expertly formulated with a blend of natural ingredients known for their beneficial effects on heart function and blood pressure regulation. This supplement is designed to complement a healthy lifestyle and may offer support in maintaining cardiovascular well-being Arteris Plus Apoteket.
Official website:<a href="URL
<p><a href="URL <img src="URL alt="enter image description here"> </a></p>
<a href="URL Now!! Click the link below for more details and get a 50% discount now... Hurry</a>
Official website:<a href="URL
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
text-generation
|
transformers
|
# `Stable LM 2 BRIEF 1.6B`
Fine-tuned chat model based on `stabilityai/stablelm-2-1_6b`. It was trained for 21 epochs with a 1024-token context window on a mixture of small subsets of UltraChat and OASST2. Since the model saw shorter dialogs, it tends to be less verbose than StabilityAI's 1.6B chat model `stabilityai/stablelm-2-zephyr-1_6b`.
Instruction format:
```
<|user|>
Hi!<|endoftext|>
<|assistant|>
Hello there!<|endoftext|>
```
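A minimal generation sketch with 🤗 Transformers that follows this format (an illustrative example; older transformers versions may additionally need `trust_remote_code=True` for StableLM 2):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maxim-saplin/stablelm-2-brief-1_6b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "<|user|>\nHi!<|endoftext|>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```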
maxim-saplin/stablelm-2-brief-1_6b:

stabilityai/stablelm-2-zephyr-1_6b:

Available in 16-bit HF safetensors and 16 and 8-bit GGUF.
Training code is [here](https://github.com/maxim-saplin/finetuning/blob/main/qlora.py)
# MT-Bench
```
########## First turn ##########
                                 score
model                    turn
stablelm-2-brief-1_6b_2  1       3.240506
stablelm-2-brief-1_6b_3  1       3.202532
stablelm-2-brief-1_6b    1       2.850000*

########## Second turn ##########
                                 score
model                    turn
stablelm-2-brief-1_6b_3  2       2.443038
stablelm-2-brief-1_6b_2  2       2.350000
stablelm-2-brief-1_6b    2       2.175000*

########## Average ##########
                                 score
model
stablelm-2-brief-1_6b_3          2.822785
stablelm-2-brief-1_6b_2          2.792453
stablelm-2-brief-1_6b            2.512500*
```
`stablelm-2-zephyr-1_6b` scored 5.42
|
{"language": ["en"], "license": "other", "tags": ["causal-lm"], "datasets": ["HuggingFaceH4/ultrachat_200k", "g-ronimo/oasst2_top4k_en"], "pipeline_tag": "text-generation"}
|
maxim-saplin/stablelm-2-brief-1_6b
| null |
[
"transformers",
"safetensors",
"gguf",
"stablelm",
"text-generation",
"causal-lm",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:g-ronimo/oasst2_top4k_en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:54:35+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #gguf #stablelm #text-generation #causal-lm #conversational #en #dataset-HuggingFaceH4/ultrachat_200k #dataset-g-ronimo/oasst2_top4k_en #license-other #autotrain_compatible #endpoints_compatible #region-us
|
# 'Stable LM 2 BRIEF 1.6B'
Fine-tuned chat model based on 'stabilityai/stablelm-2-1_6b'. It was trained for 21 epochs with a 1024-token context window on a mixture of small subsets of UltraChat and OASST2. Since the model saw shorter dialogs, it tends to be less verbose than StabilityAI's 1.6B chat model 'stabilityai/stablelm-2-zephyr-1_6b'.
Instruction format:
maxim-saplin/stablelm-2-brief-1_6b:
!image/png
stabilityai/stablelm-2-zephyr-1_6b:
!image/png
Available in 16-bit HF safetensors and 16 and 8-bit GGUF.
Training code is here
# MT-Bench
'stablelm-2-zephyr-1_6b' scored 5.42
|
[
"# 'Stable LM 2 BRIEF 1.6B'\n\nFine-tuned chat model based on 'stabilityai/stablelm-2-1_6b'. It was trained for 21 epochs using 1024 context windows and a mixture of small subsets of UltraChat and OASST2. Since the model saw shorter dialogs it tends to be less verbose than StabilityAI's 1.6B chat model 'stabilityai/stablelm-2-zephyr-1_6b'.\n\nInstruction format:\n\n\nmaxim-saplin/stablelm-2-brief-1_6b:\n!image/png\n\nstabilityai/stablelm-2-zephyr-1_6b:\n!image/png\n\nAvailable in 16-bit HF safetensors and 16 and 8-bit GGUF.\n\nTraining code is here",
"# MT-Bench\n\n\n'stablelm-2-zephyr-1_6b' scored 5.42"
] |
[
"TAGS\n#transformers #safetensors #gguf #stablelm #text-generation #causal-lm #conversational #en #dataset-HuggingFaceH4/ultrachat_200k #dataset-g-ronimo/oasst2_top4k_en #license-other #autotrain_compatible #endpoints_compatible #region-us \n",
"# 'Stable LM 2 BRIEF 1.6B'\n\nFine-tuned chat model based on 'stabilityai/stablelm-2-1_6b'. It was trained for 21 epochs using 1024 context windows and a mixture of small subsets of UltraChat and OASST2. Since the model saw shorter dialogs it tends to be less verbose than StabilityAI's 1.6B chat model 'stabilityai/stablelm-2-zephyr-1_6b'.\n\nInstruction format:\n\n\nmaxim-saplin/stablelm-2-brief-1_6b:\n!image/png\n\nstabilityai/stablelm-2-zephyr-1_6b:\n!image/png\n\nAvailable in 16-bit HF safetensors and 16 and 8-bit GGUF.\n\nTraining code is here",
"# MT-Bench\n\n\n'stablelm-2-zephyr-1_6b' scored 5.42"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [jkhan447/sarcasm-detection-Bert-base-uncased](https://huggingface.co/jkhan447/sarcasm-detection-Bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5616
- Accuracy: 0.7233
- F1: 0.4483
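The reported accuracy and F1 can be computed with a metric function along these lines (a sketch, assuming binary sarcasm labels; not necessarily the exact evaluation code):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": accuracy_score(labels, preds), "f1": f1_score(labels, preds)}
```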
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "jkhan447/sarcasm-detection-Bert-base-uncased", "model-index": [{"name": "results", "results": []}]}
|
dianamihalache27/results
| null |
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:jkhan447/sarcasm-detection-Bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T11:55:12+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-jkhan447/sarcasm-detection-Bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# results
This model is a fine-tuned version of jkhan447/sarcasm-detection-Bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5616
- Accuracy: 0.7233
- F1: 0.4483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# results\n\nThis model is a fine-tuned version of jkhan447/sarcasm-detection-Bert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.5616\n- Accuracy: 0.7233\n- F1: 0.4483",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-jkhan447/sarcasm-detection-Bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# results\n\nThis model is a fine-tuned version of jkhan447/sarcasm-detection-Bert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.5616\n- Accuracy: 0.7233\n- F1: 0.4483",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-classifier-roberta1
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the [lectures](https://huggingface.co/datasets/AlexanderBenady/generated_lectures) dataset.
It achieves the following results on the test set:
- Loss: 0.5266
- Precision: 0.9244
- Recall: 0.9200
- F1-score: 0.9198
- Accuracy: 0.92
## Model description
The model was trained on a lectures dataset of 1000 rows of data. Hyperparameter tuning was also done to achieve these results.
## Intended uses & limitations
More information needed
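For reference, a minimal inference sketch (the returned labels depend on the lecture fields in the dataset; the example sentence is hypothetical):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gserafico/roberta-base-finetuned-classifier-roberta1",
)
print(classifier("Today's lecture covers gradient descent and backpropagation."))
```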
## Training and evaluation data
The dataset was split into 80% training data, 10% validation data and 10% test data. We ensured that each split would have a proportional number of lectures per field.
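A stratified 80/10/10 split like the one described can be reproduced, for example, with scikit-learn (an illustrative sketch; `labels` stands in for the per-lecture field labels):

```python
from sklearn.model_selection import train_test_split

labels = ["math", "history", "physics"] * 100  # hypothetical field labels
indices = list(range(len(labels)))

train_idx, rest_idx = train_test_split(
    indices, test_size=0.2, stratify=labels, random_state=42
)
val_idx, test_idx = train_test_split(
    rest_idx, test_size=0.5,
    stratify=[labels[i] for i in rest_idx], random_state=42
)
```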
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1-score | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:--------:|
| No log | 1.0 | 25 | 0.4560 | 0.9362 | 0.93 | 0.9308 | 0.93 |
| No log | 2.0 | 50 | 0.3287 | 0.9519 | 0.95 | 0.9505 | 0.95 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "accuracy"], "base_model": "roberta-base", "model-index": [{"name": "roberta-base-finetuned-classifier-roberta1", "results": []}]}
|
gserafico/roberta-base-finetuned-classifier-roberta1
| null |
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2024-04-13T11:58:55+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
roberta-base-finetuned-classifier-roberta1
==========================================
This model is a fine-tuned version of roberta-base on the lectures dataset.
It achieves the following results on the test set:
* Loss: 0.5266
* Precision: 0.9244
* Recall: 0.9200
* F1-score: 0.9198
* Accuracy: 0.92
Model description
-----------------
The model was trained on a lectures dataset of 1000 rows of data. Hyperparameter tuning was also done to achieve these results.
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
The dataset was split into 80% training data, 10% validation data and 10% test data. We ensured that each split would have a proportional number of lectures per field.
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
feature-extraction
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_bge_ver10
This model is a fine-tuned version of [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on a Vietnamese SQuAD dataset and 80% of the Zalo AI Legal Text dataset (unsegmented).
## Model description
More information needed
## Intended uses & limitations
More information needed
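Pending proper documentation, here is a rough embedding sketch; CLS pooling with L2 normalization (the scheme used by the original bge-m3 dense head) is assumed, and the example sentences are purely illustrative.

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "comet24082002/finetuned_bge_ver10"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

sentences = ["Điều 1. Phạm vi điều chỉnh", "Luật này quy định về hợp đồng lao động."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    # CLS pooling + L2 normalization, assumed from the original bge-m3 dense head.
    hidden = model(**batch).last_hidden_state
    embeddings = torch.nn.functional.normalize(hidden[:, 0], dim=-1)

# Cosine similarity between the two sentences.
print((embeddings[0] @ embeddings[1]).item())
```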
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "BAAI/bge-m3", "model-index": [{"name": "finetuned_bge_ver10", "results": []}]}
|
comet24082002/finetuned_bge_ver10
| null |
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"feature-extraction",
"generated_from_trainer",
"base_model:BAAI/bge-m3",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:01:53+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us
|
# finetuned_bge_ver10
This model is a fine-tuned version of BAAI/bge-m3 on a Vietnamese SQuAD dataset and 80% of the Zalo AI Legal Text dataset (without word segmentation).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# finetuned_bge_ver10\n\nThis model is a fine-tuned version of BAAI/bge-m3 on Vietnamese SQuAD dataset and 80% Zalo AI Legal Text dataset no segmented .",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us \n",
"# finetuned_bge_ver10\n\nThis model is a fine-tuned version of BAAI/bge-m3 on Vietnamese SQuAD dataset and 80% Zalo AI Legal Text dataset no segmented .",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null |
adapter-transformers
|
# Adapter `BigTMiami/F_adapter_ia3_pretraining_P_20` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_split_25M_reviews_20_percent_condensed](https://huggingface.co/datasets/BigTMiami/amazon_split_25M_reviews_20_percent_condensed/) dataset and includes a prediction head for masked lm.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/F_adapter_ia3_pretraining_P_20", source="hf", set_active=True)
```
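As a quick sanity check (an illustrative sketch, not from the card), the loaded masked-LM head can be queried directly; this assumes the prediction head returns standard MLM logits.

```python
import torch
from transformers import AutoTokenizer

# Continues from the snippet above, where `model` has the adapter activated.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("The battery life of this <mask> is great.", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Position of the <mask> token, then the top predicted token at that slot.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
print(tokenizer.decode(logits[0, mask_pos].argmax().item()))
```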
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
{"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_split_25M_reviews_20_percent_condensed"]}
|
BigTMiami/F_adapter_ia3_pretraining_P_20
| null |
[
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_split_25M_reviews_20_percent_condensed",
"region:us"
] | null |
2024-04-13T12:02:19+00:00
|
[] |
[] |
TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_split_25M_reviews_20_percent_condensed #region-us
|
# Adapter 'BigTMiami/F_adapter_ia3_pretraining_P_20' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
|
[
"# Adapter 'BigTMiami/F_adapter_ia3_pretraining_P_20' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
[
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_split_25M_reviews_20_percent_condensed #region-us \n",
"# Adapter 'BigTMiami/F_adapter_ia3_pretraining_P_20' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
text-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
jhovany/Beto_Clasificar_Tweets_Mexicanos_DataAumen_Homomex2024
| null |
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:04:50+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
adapter-transformers
|
# Adapter `BigTMiami/G_adapter_compactor_pretraining_P_20` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_split_25M_reviews_20_percent_condensed](https://huggingface.co/datasets/BigTMiami/amazon_split_25M_reviews_20_percent_condensed/) dataset and includes a prediction head for masked lm.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/G_adapter_compactor_pretraining_P_20", source="hf", set_active=True)
```
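When several adapters are loaded into the same base model, the Adapters library can switch between or remove them; a brief sketch continuing from the snippet above:

```python
# Continues from the snippet above, where `adapter_name` was returned by load_adapter.
model.set_active_adapters(adapter_name)  # (re)activate this adapter
model.set_active_adapters(None)          # deactivate all adapters
model.delete_adapter(adapter_name)       # drop the adapter weights entirely
```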
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
{"tags": ["adapter-transformers", "roberta"], "datasets": ["BigTMiami/amazon_split_25M_reviews_20_percent_condensed"]}
|
BigTMiami/G_adapter_compactor_pretraining_P_20
| null |
[
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_split_25M_reviews_20_percent_condensed",
"region:us"
] | null |
2024-04-13T12:07:43+00:00
|
[] |
[] |
TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_split_25M_reviews_20_percent_condensed #region-us
|
# Adapter 'BigTMiami/G_adapter_compactor_pretraining_P_20' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
|
[
"# Adapter 'BigTMiami/G_adapter_compactor_pretraining_P_20' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
[
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_split_25M_reviews_20_percent_condensed #region-us \n",
"# Adapter 'BigTMiami/G_adapter_compactor_pretraining_P_20' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-python_code_instructions_18k_alpaca
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5063
## Model description
More information needed
## Intended uses & limitations
More information needed
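In the meantime, a rough generation sketch; the Alpaca-style prompt format below is an assumption inferred from the dataset name, not something this card documents.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Vishaltiwari2019/distilgpt2-finetuned-python_code_instructions_18k_alpaca",
)

# Alpaca-style prompt is assumed from the dataset name; adjust if needed.
prompt = (
    "Below is an instruction that describes a task.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:\n"
)
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```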
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7264 | 1.0 | 3861 | 1.5890 |
| 1.6046 | 2.0 | 7722 | 1.5214 |
| 1.5359 | 3.0 | 11583 | 1.5063 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["generated_from_trainer"], "datasets": ["iamtarun/python_code_instructions_18k_alpaca"], "metrics": ["accuracy"], "base_model": "distilgpt2", "pipeline_tag": "text-generation", "model-index": [{"name": "distilgpt2-finetuned-python_code_instructions_18k_alpaca", "results": []}]}
|
Vishaltiwari2019/distilgpt2-finetuned-python_code_instructions_18k_alpaca
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"en",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"base_model:distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T12:09:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #en #dataset-iamtarun/python_code_instructions_18k_alpaca #base_model-distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
distilgpt2-finetuned-python\_code\_instructions\_18k\_alpaca
============================================================
This model is a fine-tuned version of distilgpt2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5063
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #en #dataset-iamtarun/python_code_instructions_18k_alpaca #base_model-distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: WizardLM/WizardMath-7B-V1.1
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
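For reference, a configuration like this is normally applied with mergekit's `mergekit-yaml` CLI (roughly `mergekit-yaml config.yaml ./merged`), after which the result behaves like an ordinary Transformers checkpoint. A minimal loading sketch, with the repo id taken from this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id from this card; bfloat16 matches the dtype in the merge config above.
repo = "mergekit-community/mergekit-slerp-jgwqzez"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

inputs = tokenizer("What is 17 * 24?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```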
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["WizardLM/WizardMath-7B-V1.1", "NousResearch/Hermes-2-Pro-Mistral-7B"]}
|
mergekit-community/mergekit-slerp-jgwqzez
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:WizardLM/WizardMath-7B-V1.1",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T12:09:30+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-WizardLM/WizardMath-7B-V1.1 #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* WizardLM/WizardMath-7B-V1.1
* NousResearch/Hermes-2-Pro-Mistral-7B
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* WizardLM/WizardMath-7B-V1.1\n* NousResearch/Hermes-2-Pro-Mistral-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-WizardLM/WizardMath-7B-V1.1 #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* WizardLM/WizardMath-7B-V1.1\n* NousResearch/Hermes-2-Pro-Mistral-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-to-image
|
diffusers
|
# API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "zavychromaxl-v6"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/zavychromaxl-v6)
Model link: [View model](https://modelslab.com/models/zavychromaxl-v6)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "zavychromaxl-v6",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
{"license": "creativeml-openrail-m", "tags": ["modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic"], "pinned": true}
|
stablediffusionapi/zavychromaxl-v6
| null |
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null |
2024-04-13T12:09:35+00:00
|
[] |
[] |
TAGS
#diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #has_space #diffusers-StableDiffusionXLPipeline #region-us
|
# API Inference
!generated from URL
## Get API Key
Get API key from ModelsLab API, No Payment needed.
Replace Key in below code, change model_id to "zavychromaxl-v6"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs
Try model for free: Generate Images
Model link: View model
View all models: View Models
import requests
import json
url = "URL
payload = URL({
"key": "your_api_key",
"model_id": "zavychromaxl-v6",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(URL)
> Use this coupon code to get 25% off DMGG0RBN
|
[
"# API Inference\n\n!generated from URL",
"## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"zavychromaxl-v6\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"zavychromaxl-v6\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN"
] |
[
"TAGS\n#diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #has_space #diffusers-StableDiffusionXLPipeline #region-us \n",
"# API Inference\n\n!generated from URL",
"## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"zavychromaxl-v6\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"zavychromaxl-v6\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imdb_classification_on_25M_full_pretrained_best_epoch_f1
This model is a fine-tuned version of [ltuzova/amazon_domain_pretrained_model](https://huggingface.co/ltuzova/amazon_domain_pretrained_model) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1535
- Accuracy: 0.957
- F1 Macro: 0.9570
## Model description
More information needed
## Intended uses & limitations
More information needed
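Until then, a minimal scoring sketch; the example review is illustrative, and the label mapping is read from the config since the card does not list label names.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "ltuzova/imdb_classification_on_25M_full_pretrained_best_epoch_f1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("A beautifully shot, quietly devastating film.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

# Label names are undocumented; the config's id2label gives the mapping.
for i, p in enumerate(probs[0].tolist()):
    print(model.config.id2label[i], round(p, 4))
```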
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.2464 | 1.0 | 1250 | 0.2060 | 0.9492 | 0.9492 |
| 0.1454 | 2.0 | 2500 | 0.1566 | 0.9568 | 0.9568 |
| 0.0968 | 3.0 | 3750 | 0.2741 | 0.9524 | 0.9524 |
| 0.0535 | 4.0 | 5000 | 0.2514 | 0.9554 | 0.9554 |
| 0.0426 | 5.0 | 6250 | 0.2621 | 0.9542 | 0.9542 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "ltuzova/amazon_domain_pretrained_model", "model-index": [{"name": "imdb_classification_on_25M_full_pretrained_best_epoch_f1", "results": []}]}
|
ltuzova/imdb_classification_on_25M_full_pretrained_best_epoch_f1
| null |
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:ltuzova/amazon_domain_pretrained_model",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:12:17+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-ltuzova/amazon_domain_pretrained_model #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
imdb\_classification\_on\_25M\_full\_pretrained\_best\_epoch\_f1
================================================================
This model is a fine-tuned version of ltuzova/amazon\_domain\_pretrained\_model on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1535
* Accuracy: 0.957
* F1 Macro: 0.9570
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.06
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-ltuzova/amazon_domain_pretrained_model #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/CultriX/MonaCeption-7B-SLERP-SFT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
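As one concrete option (an illustration, not something this card prescribes), the quants can also be run locally with `llama-cpp-python`; the filename below matches the Q4_K_M row in the table that follows.

```python
from llama_cpp import Llama

# Filename matches the Q4_K_M entry in the quant table below; download it first.
llm = Llama(model_path="MonaCeption-7B-SLERP-SFT.Q4_K_M.gguf", n_ctx=2048)

out = llm("Q: Name the largest planet in the solar system.\nA:", max_tokens=32)
print(out["choices"][0]["text"])
```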
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "library_name": "transformers", "base_model": "CultriX/MonaCeption-7B-SLERP-SFT", "quantized_by": "mradermacher"}
|
mradermacher/MonaCeption-7B-SLERP-SFT-GGUF
| null |
[
"transformers",
"gguf",
"en",
"base_model:CultriX/MonaCeption-7B-SLERP-SFT",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:12:34+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #en #base_model-CultriX/MonaCeption-7B-SLERP-SFT #endpoints_compatible #region-us
|
About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #en #base_model-CultriX/MonaCeption-7B-SLERP-SFT #endpoints_compatible #region-us \n"
] |
null | null |
Just a mirror of files from https://github.com/Megvii-BaseDetection/YOLOX/blob/main/demo/ONNXRuntime/README.md
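For convenience, a minimal onnxruntime sketch for running one of the mirrored models; the filename and the 640x640 input size are assumptions based on the upstream YOLOX ONNX demo, and real use needs the demo's preprocessing and postprocessing.

```python
import numpy as np
import onnxruntime as ort

# Filename and input size are assumptions based on the upstream YOLOX ONNX demo.
session = ort.InferenceSession("yolox_s.onnx")
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)

outputs = session.run(None, {session.get_inputs()[0].name: dummy})
print([o.shape for o in outputs])
```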
|
{"license": "apache-2.0"}
|
halffried/yolox
| null |
[
"onnx",
"license:apache-2.0",
"region:us"
] | null |
2024-04-13T12:13:34+00:00
|
[] |
[] |
TAGS
#onnx #license-apache-2.0 #region-us
|
Just a mirror of files from URL
|
[] |
[
"TAGS\n#onnx #license-apache-2.0 #region-us \n"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazon_kindle_sentiment_analysis_final
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9292
- Accuracy: 0.6083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6325 | 0.01 | 10 | 1.5991 | 0.2008 |
| 1.582 | 0.02 | 20 | 1.5884 | 0.2975 |
| 1.5363 | 0.03 | 30 | 1.5598 | 0.275 |
| 1.5358 | 0.03 | 40 | 1.4794 | 0.3367 |
| 1.5065 | 0.04 | 50 | 1.3755 | 0.3825 |
| 1.392 | 0.05 | 60 | 1.2745 | 0.4458 |
| 1.3156 | 0.06 | 70 | 1.2164 | 0.4058 |
| 1.2387 | 0.07 | 80 | 1.2414 | 0.4283 |
| 1.3621 | 0.07 | 90 | 1.1632 | 0.4808 |
| 1.242 | 0.08 | 100 | 1.4167 | 0.365 |
| 1.242 | 0.09 | 110 | 1.2389 | 0.4367 |
| 1.2249 | 0.1 | 120 | 1.3337 | 0.4083 |
| 1.3734 | 0.11 | 130 | 1.1265 | 0.49 |
| 1.3003 | 0.12 | 140 | 1.1609 | 0.4675 |
| 1.0861 | 0.12 | 150 | 1.4087 | 0.4017 |
| 1.1882 | 0.13 | 160 | 1.1827 | 0.4442 |
| 1.2343 | 0.14 | 170 | 1.2829 | 0.43 |
| 1.1028 | 0.15 | 180 | 1.1621 | 0.505 |
| 1.3076 | 0.16 | 190 | 1.2701 | 0.405 |
| 1.1405 | 0.17 | 200 | 1.2041 | 0.4458 |
| 1.2308 | 0.17 | 210 | 1.1216 | 0.4558 |
| 1.2857 | 0.18 | 220 | 1.1178 | 0.47 |
| 1.1742 | 0.19 | 230 | 1.1206 | 0.5225 |
| 1.1472 | 0.2 | 240 | 1.1125 | 0.4925 |
| 1.2596 | 0.21 | 250 | 1.1274 | 0.4867 |
| 1.1323 | 0.22 | 260 | 1.0597 | 0.5342 |
| 1.0149 | 0.23 | 270 | 1.0703 | 0.5225 |
| 1.1787 | 0.23 | 280 | 1.1797 | 0.485 |
| 1.1165 | 0.24 | 290 | 1.1038 | 0.51 |
| 1.0636 | 0.25 | 300 | 1.0767 | 0.5392 |
| 1.1375 | 0.26 | 310 | 1.0852 | 0.525 |
| 1.2661 | 0.27 | 320 | 1.1114 | 0.4967 |
| 1.168 | 0.28 | 330 | 1.0562 | 0.5258 |
| 1.2809 | 0.28 | 340 | 1.0941 | 0.5117 |
| 1.1089 | 0.29 | 350 | 1.0778 | 0.4975 |
| 1.0561 | 0.3 | 360 | 1.1326 | 0.4467 |
| 1.1941 | 0.31 | 370 | 1.0203 | 0.5417 |
| 1.0317 | 0.32 | 380 | 1.0387 | 0.525 |
| 1.1056 | 0.33 | 390 | 1.0369 | 0.5342 |
| 1.2667 | 0.33 | 400 | 1.0146 | 0.5492 |
| 1.1629 | 0.34 | 410 | 1.0507 | 0.5442 |
| 0.9479 | 0.35 | 420 | 1.0260 | 0.5408 |
| 1.049 | 0.36 | 430 | 1.0202 | 0.5292 |
| 1.0778 | 0.37 | 440 | 0.9926 | 0.5583 |
| 0.8634 | 0.38 | 450 | 1.0885 | 0.5275 |
| 1.1605 | 0.38 | 460 | 0.9796 | 0.5558 |
| 1.0557 | 0.39 | 470 | 1.0047 | 0.5475 |
| 1.0388 | 0.4 | 480 | 1.0342 | 0.5508 |
| 0.9782 | 0.41 | 490 | 1.0917 | 0.5008 |
| 1.0986 | 0.42 | 500 | 1.1414 | 0.5125 |
| 1.0982 | 0.42 | 510 | 1.0110 | 0.5433 |
| 1.0264 | 0.43 | 520 | 1.0003 | 0.5542 |
| 1.045 | 0.44 | 530 | 1.0478 | 0.5333 |
| 1.0532 | 0.45 | 540 | 1.0700 | 0.5133 |
| 1.1025 | 0.46 | 550 | 1.0191 | 0.5367 |
| 1.1531 | 0.47 | 560 | 1.0951 | 0.5325 |
| 1.0562 | 0.47 | 570 | 1.0527 | 0.5208 |
| 1.0935 | 0.48 | 580 | 1.0087 | 0.5525 |
| 0.9918 | 0.49 | 590 | 1.1201 | 0.49 |
| 1.0098 | 0.5 | 600 | 1.0244 | 0.5425 |
| 1.0064 | 0.51 | 610 | 1.0375 | 0.54 |
| 1.0049 | 0.52 | 620 | 1.0123 | 0.545 |
| 1.2486 | 0.53 | 630 | 1.1018 | 0.515 |
| 1.1461 | 0.53 | 640 | 1.0022 | 0.5433 |
| 1.0624 | 0.54 | 650 | 1.0048 | 0.5483 |
| 1.1099 | 0.55 | 660 | 0.9848 | 0.5575 |
| 1.0624 | 0.56 | 670 | 0.9804 | 0.5575 |
| 1.0554 | 0.57 | 680 | 1.0003 | 0.5458 |
| 1.009 | 0.57 | 690 | 1.0566 | 0.5183 |
| 1.0326 | 0.58 | 700 | 1.0969 | 0.5125 |
| 1.1079 | 0.59 | 710 | 1.0181 | 0.5383 |
| 0.9814 | 0.6 | 720 | 0.9950 | 0.5442 |
| 0.9225 | 0.61 | 730 | 1.0021 | 0.5533 |
| 1.0408 | 0.62 | 740 | 1.0248 | 0.555 |
| 1.033 | 0.62 | 750 | 1.0587 | 0.5142 |
| 0.9995 | 0.63 | 760 | 0.9895 | 0.5592 |
| 0.9837 | 0.64 | 770 | 0.9991 | 0.5458 |
| 1.0901 | 0.65 | 780 | 1.0011 | 0.5583 |
| 0.89 | 0.66 | 790 | 1.1504 | 0.5108 |
| 1.2833 | 0.67 | 800 | 1.0055 | 0.545 |
| 0.9868 | 0.68 | 810 | 1.0228 | 0.5333 |
| 1.0203 | 0.68 | 820 | 0.9798 | 0.5517 |
| 0.9778 | 0.69 | 830 | 0.9739 | 0.5583 |
| 1.0526 | 0.7 | 840 | 0.9525 | 0.5592 |
| 1.0748 | 0.71 | 850 | 0.9648 | 0.5625 |
| 1.0818 | 0.72 | 860 | 0.9868 | 0.5425 |
| 1.1718 | 0.72 | 870 | 1.0041 | 0.54 |
| 1.052 | 0.73 | 880 | 0.9926 | 0.5633 |
| 0.9856 | 0.74 | 890 | 1.0029 | 0.5367 |
| 1.0623 | 0.75 | 900 | 1.0920 | 0.5375 |
| 1.2347 | 0.76 | 910 | 0.9949 | 0.5558 |
| 1.0313 | 0.77 | 920 | 0.9606 | 0.5583 |
| 1.0903 | 0.78 | 930 | 0.9673 | 0.5542 |
| 1.0952 | 0.78 | 940 | 0.9778 | 0.55 |
| 0.9689 | 0.79 | 950 | 0.9669 | 0.555 |
| 1.1173 | 0.8 | 960 | 0.9415 | 0.57 |
| 0.9672 | 0.81 | 970 | 0.9393 | 0.575 |
| 0.8624 | 0.82 | 980 | 0.9409 | 0.5767 |
| 0.9683 | 0.82 | 990 | 0.9770 | 0.5467 |
| 0.9681 | 0.83 | 1000 | 1.0163 | 0.5383 |
| 1.1862 | 0.84 | 1010 | 0.9532 | 0.5692 |
| 0.9935 | 0.85 | 1020 | 0.9815 | 0.5425 |
| 1.1011 | 0.86 | 1030 | 1.0984 | 0.5167 |
| 0.9626 | 0.87 | 1040 | 0.9493 | 0.5675 |
| 1.0268 | 0.88 | 1050 | 1.0367 | 0.5583 |
| 1.15 | 0.88 | 1060 | 0.9533 | 0.5792 |
| 1.1613 | 0.89 | 1070 | 0.9860 | 0.5558 |
| 0.8876 | 0.9 | 1080 | 0.9948 | 0.5583 |
| 0.9916 | 0.91 | 1090 | 0.9418 | 0.5775 |
| 1.034 | 0.92 | 1100 | 0.9532 | 0.5617 |
| 1.2078 | 0.93 | 1110 | 0.9457 | 0.5658 |
| 0.9293 | 0.93 | 1120 | 0.9474 | 0.5642 |
| 0.9945 | 0.94 | 1130 | 0.9856 | 0.5467 |
| 1.1483 | 0.95 | 1140 | 0.9769 | 0.5683 |
| 1.0652 | 0.96 | 1150 | 0.9333 | 0.5808 |
| 0.943 | 0.97 | 1160 | 0.9613 | 0.5717 |
| 0.8959 | 0.97 | 1170 | 0.9292 | 0.5742 |
| 1.0558 | 0.98 | 1180 | 0.9216 | 0.5833 |
| 0.9603 | 0.99 | 1190 | 0.9352 | 0.5792 |
| 1.0854 | 1.0 | 1200 | 0.9403 | 0.5833 |
| 0.7435 | 1.01 | 1210 | 0.9177 | 0.5875 |
| 0.8798 | 1.02 | 1220 | 0.9625 | 0.5592 |
| 0.7921 | 1.02 | 1230 | 0.9349 | 0.5717 |
| 0.6194 | 1.03 | 1240 | 0.9355 | 0.5958 |
| 0.7193 | 1.04 | 1250 | 0.9540 | 0.5908 |
| 0.7902 | 1.05 | 1260 | 0.9912 | 0.5842 |
| 0.9728 | 1.06 | 1270 | 0.9568 | 0.5817 |
| 0.8948 | 1.07 | 1280 | 0.9572 | 0.5667 |
| 0.7172 | 1.07 | 1290 | 1.0331 | 0.5608 |
| 0.7905 | 1.08 | 1300 | 0.9523 | 0.5692 |
| 0.7699 | 1.09 | 1310 | 1.0065 | 0.5625 |
| 0.9043 | 1.1 | 1320 | 1.0960 | 0.5208 |
| 0.8665 | 1.11 | 1330 | 0.9775 | 0.565 |
| 0.7809 | 1.12 | 1340 | 0.9941 | 0.5542 |
| 0.6454 | 1.12 | 1350 | 1.0438 | 0.5675 |
| 0.8919 | 1.13 | 1360 | 0.9652 | 0.5742 |
| 0.6891 | 1.14 | 1370 | 1.0144 | 0.5733 |
| 0.7775 | 1.15 | 1380 | 0.9862 | 0.5733 |
| 0.7469 | 1.16 | 1390 | 0.9685 | 0.58 |
| 0.7795 | 1.17 | 1400 | 0.9845 | 0.5725 |
| 0.8594 | 1.18 | 1410 | 0.9833 | 0.5683 |
| 0.8161 | 1.18 | 1420 | 0.9933 | 0.57 |
| 0.8836 | 1.19 | 1430 | 1.0641 | 0.5383 |
| 0.8316 | 1.2 | 1440 | 0.9770 | 0.5642 |
| 0.7242 | 1.21 | 1450 | 0.9647 | 0.5592 |
| 0.7889 | 1.22 | 1460 | 1.0378 | 0.5525 |
| 0.8997 | 1.23 | 1470 | 1.0074 | 0.5717 |
| 0.8315 | 1.23 | 1480 | 0.9810 | 0.585 |
| 0.928 | 1.24 | 1490 | 0.9673 | 0.585 |
| 0.7716 | 1.25 | 1500 | 0.9758 | 0.5717 |
| 0.7998 | 1.26 | 1510 | 0.9980 | 0.58 |
| 0.6825 | 1.27 | 1520 | 0.9766 | 0.5867 |
| 0.8354 | 1.27 | 1530 | 0.9622 | 0.5783 |
| 0.7766 | 1.28 | 1540 | 0.9506 | 0.59 |
| 0.7776 | 1.29 | 1550 | 0.9469 | 0.5992 |
| 0.772 | 1.3 | 1560 | 0.9380 | 0.5908 |
| 0.6561 | 1.31 | 1570 | 0.9896 | 0.5783 |
| 0.9753 | 1.32 | 1580 | 0.9428 | 0.59 |
| 0.685 | 1.32 | 1590 | 0.9492 | 0.59 |
| 0.7375 | 1.33 | 1600 | 0.9792 | 0.5825 |
| 0.7753 | 1.34 | 1610 | 0.9611 | 0.5908 |
| 0.7627 | 1.35 | 1620 | 0.9787 | 0.5842 |
| 0.8135 | 1.36 | 1630 | 1.0000 | 0.5617 |
| 0.8031 | 1.37 | 1640 | 0.9770 | 0.5758 |
| 0.7973 | 1.38 | 1650 | 1.0509 | 0.58 |
| 0.7865 | 1.38 | 1660 | 0.9632 | 0.5925 |
| 0.8153 | 1.39 | 1670 | 0.9572 | 0.5875 |
| 0.7022 | 1.4 | 1680 | 0.9872 | 0.5767 |
| 0.7709 | 1.41 | 1690 | 0.9615 | 0.5833 |
| 0.8874 | 1.42 | 1700 | 0.9548 | 0.5817 |
| 0.7961 | 1.43 | 1710 | 0.9472 | 0.5967 |
| 0.7718 | 1.43 | 1720 | 0.9849 | 0.5725 |
| 0.7515 | 1.44 | 1730 | 0.9825 | 0.5742 |
| 0.9256 | 1.45 | 1740 | 0.9514 | 0.5817 |
| 0.8487 | 1.46 | 1750 | 0.9591 | 0.5908 |
| 0.7539 | 1.47 | 1760 | 0.9509 | 0.5975 |
| 0.7333 | 1.48 | 1770 | 0.9835 | 0.5883 |
| 0.6634 | 1.48 | 1780 | 1.0435 | 0.5675 |
| 0.7045 | 1.49 | 1790 | 0.9614 | 0.5908 |
| 0.8598 | 1.5 | 1800 | 0.9362 | 0.595 |
| 0.8418 | 1.51 | 1810 | 0.9409 | 0.5958 |
| 0.8074 | 1.52 | 1820 | 0.9379 | 0.5992 |
| 0.8183 | 1.52 | 1830 | 0.9296 | 0.6042 |
| 0.8257 | 1.53 | 1840 | 0.9324 | 0.6058 |
| 0.6653 | 1.54 | 1850 | 0.9261 | 0.605 |
| 0.7399 | 1.55 | 1860 | 0.9448 | 0.6033 |
| 0.762 | 1.56 | 1870 | 0.9568 | 0.5942 |
| 0.6583 | 1.57 | 1880 | 0.9650 | 0.5833 |
| 0.9037 | 1.57 | 1890 | 0.9301 | 0.5967 |
| 0.7022 | 1.58 | 1900 | 0.9499 | 0.6042 |
| 0.8169 | 1.59 | 1910 | 0.9537 | 0.6025 |
| 0.5907 | 1.6 | 1920 | 0.9700 | 0.595 |
| 0.7397 | 1.61 | 1930 | 0.9684 | 0.5942 |
| 0.7224 | 1.62 | 1940 | 0.9516 | 0.6 |
| 0.664 | 1.62 | 1950 | 0.9405 | 0.5933 |
| 0.6477 | 1.63 | 1960 | 0.9474 | 0.5967 |
| 0.8474 | 1.64 | 1970 | 0.9540 | 0.5883 |
| 0.6596 | 1.65 | 1980 | 0.9482 | 0.5992 |
| 0.7187 | 1.66 | 1990 | 0.9404 | 0.5933 |
| 0.7754 | 1.67 | 2000 | 0.9389 | 0.5892 |
| 0.7789 | 1.68 | 2010 | 0.9369 | 0.5942 |
| 0.8082 | 1.68 | 2020 | 0.9288 | 0.5983 |
| 0.7271 | 1.69 | 2030 | 0.9319 | 0.5992 |
| 0.7631 | 1.7 | 2040 | 0.9344 | 0.6 |
| 0.8105 | 1.71 | 2050 | 0.9538 | 0.5925 |
| 0.7941 | 1.72 | 2060 | 0.9625 | 0.5958 |
| 0.8801 | 1.73 | 2070 | 0.9412 | 0.5975 |
| 0.6754 | 1.73 | 2080 | 0.9262 | 0.6033 |
| 0.7901 | 1.74 | 2090 | 0.9301 | 0.5983 |
| 0.9318 | 1.75 | 2100 | 0.9361 | 0.5992 |
| 0.8394 | 1.76 | 2110 | 0.9198 | 0.6008 |
| 0.6138 | 1.77 | 2120 | 0.9234 | 0.6083 |
| 0.7199 | 1.77 | 2130 | 0.9233 | 0.6083 |
| 0.7092 | 1.78 | 2140 | 0.9219 | 0.6033 |
| 0.8862 | 1.79 | 2150 | 0.9199 | 0.6067 |
| 0.7834 | 1.8 | 2160 | 0.9199 | 0.6025 |
| 0.729 | 1.81 | 2170 | 0.9206 | 0.6058 |
| 0.6533 | 1.82 | 2180 | 0.9252 | 0.605 |
| 0.7499 | 1.82 | 2190 | 0.9284 | 0.6125 |
| 0.7665 | 1.83 | 2200 | 0.9295 | 0.6092 |
| 0.5497 | 1.84 | 2210 | 0.9369 | 0.6083 |
| 0.8775 | 1.85 | 2220 | 0.9402 | 0.61 |
| 0.6465 | 1.86 | 2230 | 0.9389 | 0.6092 |
| 0.573 | 1.87 | 2240 | 0.9369 | 0.6125 |
| 0.6852 | 1.88 | 2250 | 0.9368 | 0.6125 |
| 0.6715 | 1.88 | 2260 | 0.9346 | 0.6142 |
| 0.6263 | 1.89 | 2270 | 0.9339 | 0.6133 |
| 0.5795 | 1.9 | 2280 | 0.9313 | 0.6125 |
| 0.681 | 1.91 | 2290 | 0.9347 | 0.6083 |
| 0.7992 | 1.92 | 2300 | 0.9342 | 0.6125 |
| 0.6634 | 1.93 | 2310 | 0.9323 | 0.6117 |
| 0.7184 | 1.93 | 2320 | 0.9310 | 0.6092 |
| 0.6863 | 1.94 | 2330 | 0.9318 | 0.61 |
| 0.687 | 1.95 | 2340 | 0.9336 | 0.6117 |
| 0.7697 | 1.96 | 2350 | 0.9348 | 0.6133 |
| 0.7281 | 1.97 | 2360 | 0.9332 | 0.6117 |
| 0.7628 | 1.98 | 2370 | 0.9304 | 0.6108 |
| 0.7419 | 1.98 | 2380 | 0.9297 | 0.6075 |
| 0.7973 | 1.99 | 2390 | 0.9292 | 0.6075 |
| 0.5734 | 2.0 | 2400 | 0.9292 | 0.6083 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
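### Inference example (sketch)

A minimal inference sketch for the fine-tuned classifier documented above (assuming the checkpoint is public on the Hub; the example review is illustrative, and the meaning of the predicted labels is an assumption since the label mapping is not documented in this card):

```python
from transformers import pipeline

# Load the fine-tuned BERT classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="denise227/amazon_kindle_sentiment_analysis_final",
)

# Returns a label id (e.g. LABEL_0 ... LABEL_4) and a confidence score;
# the exact label-to-sentiment mapping is not documented in this card.
print(classifier("This book kept me reading all night. Highly recommended!"))
```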
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bert-base-uncased", "model-index": [{"name": "amazon_kindle_sentiment_analysis_final", "results": []}]}
|
denise227/amazon_kindle_sentiment_analysis_final
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:14:00+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
amazon\_kindle\_sentiment\_analysis\_final
==========================================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9292
* Accuracy: 0.6083
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-deepak
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
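A minimal sketch for loading this adapter for inference (assuming it was saved as a PEFT/LoRA adapter, as the `peft` library tag suggests; the prompt below is illustrative, not a documented format):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the TinyLlama base model and applies this adapter on top of it.
model = AutoPeftModelForCausalLM.from_pretrained("deepakdevfocaloid/tinyllama-deepak")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

inputs = tokenizer("Explain LoRA fine-tuning in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```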
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0", "model-index": [{"name": "tinyllama-deepak", "results": []}]}
|
deepakdevfocaloid/tinyllama-deepak
| null |
[
"peft",
"tensorboard",
"safetensors",
"llama",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null |
2024-04-13T12:14:13+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #llama #trl #sft #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us
|
# tinyllama-deepak
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# tinyllama-deepak\n\nThis model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 250\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #llama #trl #sft #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us \n",
"# tinyllama-deepak\n\nThis model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 250\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
unrented5443/p96urzm
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:14:34+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results2
This model is a fine-tuned version of [jkhan447/sarcasm-detection-RoBerta-base](https://huggingface.co/jkhan447/sarcasm-detection-RoBerta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6135
- Accuracy: 0.7133
- F1: 0.0
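A minimal inference sketch for the fine-tuned sarcasm classifier (the example sentence is illustrative, and the label mapping is not documented in this card):

```python
from transformers import pipeline

# Load the fine-tuned classifier and run a single prediction.
clf = pipeline("text-classification", model="dianamihalache27/results2")
print(clf("Oh great, another Monday. Just what I needed."))
```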
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "jkhan447/sarcasm-detection-RoBerta-base", "model-index": [{"name": "results2", "results": []}]}
|
dianamihalache27/results2
| null |
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:jkhan447/sarcasm-detection-RoBerta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:23:06+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-jkhan447/sarcasm-detection-RoBerta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# results2
This model is a fine-tuned version of jkhan447/sarcasm-detection-RoBerta-base on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6135
- Accuracy: 0.7133
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# results2\n\nThis model is a fine-tuned version of jkhan447/sarcasm-detection-RoBerta-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6135\n- Accuracy: 0.7133\n- F1: 0.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-jkhan447/sarcasm-detection-RoBerta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# results2\n\nThis model is a fine-tuned version of jkhan447/sarcasm-detection-RoBerta-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6135\n- Accuracy: 0.7133\n- F1: 0.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** prince-canuma
- **License:** apache-2.0
- **Finetuned from model:** prince-canuma/Damysus-Coder-v0.1
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
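A minimal sketch for loading this 4-bit checkpoint with plain `transformers` (assuming `bitsandbytes` is installed, since the checkpoint is stored quantized; Unsloth's own loader can also be used, and the prompt below is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prince-canuma/Damysus-Coder-v0.1-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config stored with the checkpoint is picked up automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer(
    "Write a Python function that reverses a string.", return_tensors="pt"
).to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```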
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "prince-canuma/Damysus-Coder-v0.1"}
|
prince-canuma/Damysus-Coder-v0.1-4bit
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:prince-canuma/Damysus-Coder-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null |
2024-04-13T12:23:45+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-prince-canuma/Damysus-Coder-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Uploaded model
- Developed by: prince-canuma
- License: apache-2.0
- Finetuned from model: prince-canuma/Damysus-Coder-v0.1
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: prince-canuma\n- License: apache-2.0\n- Finetuned from model : prince-canuma/Damysus-Coder-v0.1\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-prince-canuma/Damysus-Coder-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Uploaded model\n\n- Developed by: prince-canuma\n- License: apache-2.0\n- Finetuned from model : prince-canuma/Damysus-Coder-v0.1\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation
|
transformers
|
## Matter 7B - 0.2 - DPO (Mistral 7B Finetune)
DPO version of [Matter 7B](https://huggingface.co/0-hero/Matter-0.2-7B) fine-tuned on the [Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.2-alpha), which is curated from over 35 datasets analyzing >6B tokens
### Training
Prompt format: This model uses ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
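With a recent `transformers`, the ChatML structure above can typically be produced via the tokenizer's chat template. A sketch, assuming this repo's tokenizer ships a ChatML `chat_template` (the prompt content is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "0-hero/Matter-0.2-7B-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Summarize direct preference optimization in one sentence."},
]
# apply_chat_template renders the <|im_start|>/<|im_end|> structure shown above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```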
### Function Calling
The model also supports function calling; the following additional tokens are used.

Model function call tokens:

- <|begin_func|> - Function call start token
- <|end_func|> - Function call end token

Function call response tokens:

- <|begin_func_response|> - Function response start token
- <|end_func_response|> - Function response end token

Example:
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
|
{"language": ["en"], "license": "apache-2.0", "datasets": ["0-hero/Matter-0.2-alpha"]}
|
0-hero/Matter-0.2-7B-DPO
| null |
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:0-hero/Matter-0.2-alpha",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T12:24:15+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #mistral #text-generation #conversational #en #dataset-0-hero/Matter-0.2-alpha #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## Matter 7B - 0.2 - DPO (Mistral 7B Finetune)
DPO version of Matter 7B fine-tuned on the Matter dataset, which is curated from over 35 datasets analyzing >6B tokens
### Training
Prompt format: This model uses ChatML prompt format.
### Function Calling
Model also supports function calling. Additional tokens for function calling
Model function call tokens
- <|begin_func|> - Function call start token
- <|end_func|> - Function call end token
Function call response tokens
- <|begin_func_response|> - Function response start token
- <|end_func_response|> - Function response end token
Example
|
[
"## Matter 7B - 0.2 - DPO (Mistral 7B Finetune)\n\nDPO version of Matter 7B fine-tuned on the Matter dataset, which is curated from over 35 datsets analyzing >6B tokens",
"### Training\n\nPrompt format: This model uses ChatML prompt format.",
"### Function Calling\n\nModel also supports function calling. Additional tokens for function calling \n\nModel function call tokens\n- <|begin_func|> - Function call start token\n- <|end_func|> - Function call end token\n\nFunction call response tokens\n- <|begin_func_response|> - Function response start token\n- <|end_func_response|> - Function response end token\n\nExample"
] |
[
"TAGS\n#transformers #pytorch #safetensors #mistral #text-generation #conversational #en #dataset-0-hero/Matter-0.2-alpha #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Matter 7B - 0.2 - DPO (Mistral 7B Finetune)\n\nDPO version of Matter 7B fine-tuned on the Matter dataset, which is curated from over 35 datsets analyzing >6B tokens",
"### Training\n\nPrompt format: This model uses ChatML prompt format.",
"### Function Calling\n\nModel also supports function calling. Additional tokens for function calling \n\nModel function call tokens\n- <|begin_func|> - Function call start token\n- <|end_func|> - Function call end token\n\nFunction call response tokens\n- <|begin_func_response|> - Function response start token\n- <|end_func_response|> - Function response end token\n\nExample"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazon_helpfulness_classification_on_5M_full_pretrained_best_epoch_f1
This model is a fine-tuned version of [BigTMiami/amazon_pretraining_5M_model_corrected](https://huggingface.co/BigTMiami/amazon_pretraining_5M_model_corrected) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4124
- Accuracy: 0.8726
- F1 Macro: 0.7000
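A minimal inference sketch for the classifier (assuming the checkpoint is public on the Hub; the review text is illustrative, and reading the output probabilities as helpfulness classes is an assumption, since the label mapping is not documented in this card):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "ltuzova/amazon_helpfulness_classification_on_5M_full_pretrained_best_epoch_f1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score a single review; softmax turns the logits into class probabilities.
inputs = tokenizer("Great case, survived a one-meter drop.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```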
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.3151 | 1.0 | 7204 | 0.3359 | 0.8728 | 0.6273 |
| 0.2883 | 2.0 | 14408 | 0.3229 | 0.8738 | 0.6540 |
| 0.2622 | 3.0 | 21612 | 0.3909 | 0.87 | 0.6323 |
| 0.2539 | 4.0 | 28816 | 0.4227 | 0.8696 | 0.7007 |
| 0.2041 | 5.0 | 36020 | 0.4938 | 0.8712 | 0.6846 |
| 0.1532 | 6.0 | 43224 | 0.6590 | 0.8636 | 0.6852 |
| 0.1293 | 7.0 | 50428 | 0.8322 | 0.8622 | 0.6203 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "BigTMiami/amazon_pretraining_5M_model_corrected", "model-index": [{"name": "amazon_helpfulness_classification_on_5M_full_pretrained_best_epoch_f1", "results": []}]}
|
ltuzova/amazon_helpfulness_classification_on_5M_full_pretrained_best_epoch_f1
| null |
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:BigTMiami/amazon_pretraining_5M_model_corrected",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:28:36+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-BigTMiami/amazon_pretraining_5M_model_corrected #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
amazon\_helpfulness\_classification\_on\_5M\_full\_pretrained\_best\_epoch\_f1
==============================================================================
This model is a fine-tuned version of BigTMiami/amazon\_pretraining\_5M\_model\_corrected on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4124
* Accuracy: 0.8726
* F1 Macro: 0.7000
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.06
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-BigTMiami/amazon_pretraining_5M_model_corrected #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-tiny2-finetuned-fintech
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3573
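A minimal usage sketch for the masked-LM checkpoint (the Russian example sentence is illustrative only):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Pastushoc/rubert-tiny2-finetuned-fintech")

# rubert-tiny2 uses the BERT-style [MASK] token.
for pred in fill("Банк одобрил [MASK] по низкой ставке."):
    print(pred["token_str"], round(pred["score"], 3))
```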
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.698 | 1.0 | 2069 | 5.4277 |
| 5.5758 | 2.0 | 4138 | 5.3573 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["generator"], "base_model": "cointegrated/rubert-tiny2", "model-index": [{"name": "rubert-tiny2-finetuned-fintech", "results": []}]}
|
Pastushoc/rubert-tiny2-finetuned-fintech
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:generator",
"base_model:cointegrated/rubert-tiny2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:28:41+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #dataset-generator #base_model-cointegrated/rubert-tiny2 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
rubert-tiny2-finetuned-fintech
==============================
This model is a fine-tuned version of cointegrated/rubert-tiny2 on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 5.3573
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #dataset-generator #base_model-cointegrated/rubert-tiny2 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-to-audio
|
transformers
|
# Tango 2: Aligning Diffusion-based Text-to-Audio Generative Models through Direct Preference Optimization
🎵 We developed **Tango 2** building upon **Tango** for text-to-audio generation. Tango 2 was initialized with the Tango-full-ft checkpoint and underwent alignment training using DPO on audio-alpaca, a pairwise text-to-audio preference dataset. 🎶
[Read the paper](https://arxiv.org/abs/2404.09956)
## Code
Our code is released here: [https://github.com/declare-lab/tango](https://github.com/declare-lab/tango)
Please follow the instructions in the repository for installation, usage and experiments.
## Quickstart Guide
Download the **Tango 2** model and generate audio from a text prompt:
```python
import IPython
import soundfile as sf
from tango import Tango

# Load the Tango 2 checkpoint from the Hub (cached after the first download)
tango = Tango("declare-lab/tango2")

prompt = "An audience cheering and clapping"
audio = tango.generate(prompt)  # sample from the latent diffusion model
sf.write(f"{prompt}.wav", audio, samplerate=16000)  # save a 16 kHz WAV file
IPython.display.Audio(data=audio, rate=16000)  # play inline in a notebook
```
The model will be automatically downloaded and saved in cache. Subsequent runs will load the model directly from cache.
The `generate` function uses 100 steps by default to sample from the latent diffusion model. We recommend using 200 steps for generating better-quality audio. This comes at the cost of increased run-time.
```python
prompt = "Rolling thunder with lightning strikes"
audio = tango.generate(prompt, steps=200)
IPython.display.Audio(data=audio, rate=16000)
```
Use the `generate_for_batch` function to generate multiple audio samples for a batch of text prompts:
```python
prompts = [
"A car engine revving",
"A dog barks and rustles with some clicking",
"Water flowing and trickling"
]
audios = tango.generate_for_batch(prompts, samples=2)
```
This will generate two samples for each of the three text prompts.
|
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "tags": ["text-to-audio"], "datasets": ["bjoernp/AudioCaps", "declare-lab/audio_alpaca"], "pipeline_tag": "text-to-audio"}
|
declare-lab/tango2
| null |
[
"transformers",
"text-to-audio",
"en",
"dataset:bjoernp/AudioCaps",
"dataset:declare-lab/audio_alpaca",
"arxiv:2404.09956",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2024-04-13T12:30:03+00:00
|
[
"2404.09956"
] |
[
"en"
] |
TAGS
#transformers #text-to-audio #en #dataset-bjoernp/AudioCaps #dataset-declare-lab/audio_alpaca #arxiv-2404.09956 #license-cc-by-nc-sa-4.0 #endpoints_compatible #has_space #region-us
|
# Tango 2: Aligning Diffusion-based Text-to-Audio Generative Models through Direct Preference Optimization
We developed Tango 2 building upon Tango for text-to-audio generation. Tango 2 was initialized with the Tango-full-ft checkpoint and underwent alignment training using DPO on audio-alpaca, a pairwise text-to-audio preference dataset.
Read the paper
## Code
Our code is released here: URL
Please follow the instructions in the repository for installation, usage and experiments.
## Quickstart Guide
Download the Tango 2 model and generate audio from a text prompt:
The model will be automatically downloaded and saved in cache. Subsequent runs will load the model directly from cache.
The 'generate' function uses 100 steps by default to sample from the latent diffusion model. We recommend using 200 steps for generating better quality audios. This comes at the cost of increased run-time.
Use the 'generate_for_batch' function to generate multiple audio samples for a batch of text prompts:
This will generate two samples for each of the three text prompts.
|
[
"# Tango 2: Aligning Diffusion-based Text-to-Audio Generative Models through Direct Preference Optimization\n\n We developed Tango 2 building upon Tango for text-to-audio generation. Tango 2 was initialized with the Tango-full-ft checkpoint and underwent alignment training using DPO on audio-alpaca, a pairwise text-to-audio preference dataset. \n\nRead the paper",
"## Code\n\nOur code is released here: URL\n\n\nPlease follow the instructions in the repository for installation, usage and experiments.",
"## Quickstart Guide\n\nDownload the Tango 2 model and generate audio from a text prompt:\n\n\n\nThe model will be automatically downloaded and saved in cache. Subsequent runs will load the model directly from cache.\n\nThe 'generate' function uses 100 steps by default to sample from the latent diffusion model. We recommend using 200 steps for generating better quality audios. This comes at the cost of increased run-time.\n\n\n\n\nUse the 'generate_for_batch' function to generate multiple audio samples for a batch of text prompts:\n\n\nThis will generate two samples for each of the three text prompts."
] |
[
"TAGS\n#transformers #text-to-audio #en #dataset-bjoernp/AudioCaps #dataset-declare-lab/audio_alpaca #arxiv-2404.09956 #license-cc-by-nc-sa-4.0 #endpoints_compatible #has_space #region-us \n",
"# Tango 2: Aligning Diffusion-based Text-to-Audio Generative Models through Direct Preference Optimization\n\n We developed Tango 2 building upon Tango for text-to-audio generation. Tango 2 was initialized with the Tango-full-ft checkpoint and underwent alignment training using DPO on audio-alpaca, a pairwise text-to-audio preference dataset. \n\nRead the paper",
"## Code\n\nOur code is released here: URL\n\n\nPlease follow the instructions in the repository for installation, usage and experiments.",
"## Quickstart Guide\n\nDownload the Tango 2 model and generate audio from a text prompt:\n\n\n\nThe model will be automatically downloaded and saved in cache. Subsequent runs will load the model directly from cache.\n\nThe 'generate' function uses 100 steps by default to sample from the latent diffusion model. We recommend using 200 steps for generating better quality audios. This comes at the cost of increased run-time.\n\n\n\n\nUse the 'generate_for_batch' function to generate multiple audio samples for a batch of text prompts:\n\n\nThis will generate two samples for each of the three text prompts."
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
cilantro9246/8tyttyi
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:30:51+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
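Since this template is otherwise empty, here is a minimal, hedged loading sketch. The repo id `Serj/intent-classifier-flan-t5-small` comes from this record's metadata, and the prompt wording is purely an assumption, not a documented input format.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hedged sketch: repo id taken from the record metadata; the prompt format
# below is an assumption, not documented behavior of this checkpoint.
repo = "Serj/intent-classifier-flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("Classify the intent: I want to cancel my order.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```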
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Serj/intent-classifier-flan-t5-small
| null |
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T12:31:46+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
adapter-transformers
|
# Adapter `BigTMiami/F_adapter_ia3_classification_C_30` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/F_adapter_ia3_classification_C_30", source="hf", set_active=True)
```
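Once loaded, a hedged inference sketch (the classification head's label mapping is not documented on this card, so treat the output index as opaque):

```python
import torch
from transformers import AutoTokenizer

# Hedged sketch: reuses the `model` loaded above; label semantics of the
# prediction head are undocumented, so only the argmax index is printed.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This review was extremely helpful to me.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```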
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
{"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_helpfulness"]}
|
BigTMiami/F_adapter_ia3_classification_C_30
| null |
[
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_helpfulness",
"region:us"
] | null |
2024-04-13T12:32:30+00:00
|
[] |
[] |
TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
|
# Adapter 'BigTMiami/F_adapter_ia3_classification_C_30' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
|
[
"# Adapter 'BigTMiami/F_adapter_ia3_classification_C_30' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
[
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n",
"# Adapter 'BigTMiami/F_adapter_ia3_classification_C_30' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
text-generation
|
gguf
|
## About
Static quants of https://huggingface.co/Vezora/Mistral-22B-v0.2
IQ quants can be found here (Richard Erkhov's work): https://huggingface.co/RichardErkhov/Vezora_-_Mistral-22B-v0.2-gguf
## Provided Quants
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-22B-v0.2-Q5_K_M.gguf](https://huggingface.co/NLPark/Mistral-22B-v0.2-GGUF/blob/main/Mistral-22B-v0.1-Q5_K_M.gguf) | Q5_K_M | 15.71GB | High quality, *recommended*. |
| [Mistral-22B-v0.2-Q4_K_M.gguf](https://huggingface.co/NLPark/Mistral-22B-v0.2-GGUF/blob/main/Mistral-22B-v0.1-Q4_K_M.gguf) | Q4_K_M | 13.33GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Mistral-22B-v0.2-Q4_K_S.gguf](https://huggingface.co/NLPark/Mistral-22B-v0.2-GGUF/blob/main/Mistral-22B-v0.1-Q4_K_S.gguf) | Q4_K_S | 12.65GB | Slightly lower performance than Q4_K_M, fastest, best choice for 16G RAM devices, *recommended*. |
| [Mistral-22B-v0.2-Q3_K_M.gguf](https://huggingface.co/NLPark/Mistral-22B-v0.2-GGUF/blob/main/Mistral-22B-v0.1-Q3_K_M.gguf) | Q3_K_M | 10.75GB | Even lower quality. |
| [Mistral-22B-v0.2-Q2_K.gguf](https://huggingface.co/NLPark/Mistral-22B-v0.2-GGUF/blob/main/Mistral-22B-v0.1-Q2_K.gguf) | Q2_K | 8.26GB | Very low quality. |
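A hedged Python sketch for fetching one of the quants and running it with the `llama-cpp-python` binding (an assumed runtime choice; note that the blob links above use `v0.1` in the stored filenames, so that spelling is mirrored here):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # assumed runtime: pip install llama-cpp-python

# The blob links above point at files named "...v0.1-*.gguf"; adjust the
# filename if the repo's actual file listing differs.
path = hf_hub_download(
    repo_id="NLPark/Mistral-22B-v0.2-GGUF",
    filename="Mistral-22B-v0.1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```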
|
{"license": "apache-2.0", "library_name": "gguf", "pipeline_tag": "text-generation", "base_model": "Vezora/Mistral-22B-v0.2"}
|
NLPark/Mistral-22B-v0.2-GGUF
| null |
[
"gguf",
"text-generation",
"base_model:Vezora/Mistral-22B-v0.2",
"license:apache-2.0",
"region:us"
] | null |
2024-04-13T12:34:47+00:00
|
[] |
[] |
TAGS
#gguf #text-generation #base_model-Vezora/Mistral-22B-v0.2 #license-apache-2.0 #region-us
|
About
-----
Static quants of URL
IQ quants can be found here (Richard Erkhov's work): URL
Provided Quants
---------------
|
[] |
[
"TAGS\n#gguf #text-generation #base_model-Vezora/Mistral-22B-v0.2 #license-apache-2.0 #region-us \n"
] |
image-segmentation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b2-seed63-apr-13-v1
This model is a fine-tuned version of [nvidia/mit-b3](https://huggingface.co/nvidia/mit-b3) on the unreal-hug/REAL_DATASET_SEG_401_6_lbls dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7138
- Mean Iou: 0.1266
- Mean Accuracy: 0.2136
- Overall Accuracy: 0.4273
- Accuracy Unlabeled: nan
- Accuracy Lv: 0.6939
- Accuracy Rv: 0.0982
- Accuracy Ra: 0.1706
- Accuracy La: 0.5041
- Accuracy Vs: 0.0
- Accuracy As: 0.0
- Accuracy Mk: 0.0
- Accuracy Tk: nan
- Accuracy Asd: 0.0557
- Accuracy Vsd: 0.2283
- Accuracy Ak: 0.3849
- Iou Unlabeled: 0.0
- Iou Lv: 0.4965
- Iou Rv: 0.0899
- Iou Ra: 0.1288
- Iou La: 0.2845
- Iou Vs: 0.0
- Iou As: 0.0
- Iou Mk: 0.0
- Iou Tk: 0.0
- Iou Asd: 0.0462
- Iou Vsd: 0.1513
- Iou Ak: 0.3225
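For reference, a minimal hedged inference sketch using the standard 🤗 Transformers Segformer API (repo id from this record's metadata; the input file name is hypothetical):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "unreal-hug/segformer-b2-seed63-apr-13-v1"  # repo id from the record metadata
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("echo_frame.png").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]       # per-pixel class ids
print(pred.shape)
```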
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Lv | Accuracy Rv | Accuracy Ra | Accuracy La | Accuracy Vs | Accuracy As | Accuracy Mk | Accuracy Tk | Accuracy Asd | Accuracy Vsd | Accuracy Ak | Iou Unlabeled | Iou Lv | Iou Rv | Iou Ra | Iou La | Iou Vs | Iou As | Iou Mk | Iou Tk | Iou Asd | Iou Vsd | Iou Ak |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:------------:|:------------:|:-----------:|:-------------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:-------:|:-------:|:------:|
| 2.5423 | 2.5 | 100 | 2.6367 | 0.0332 | 0.0976 | 0.0951 | nan | 0.0612 | 0.0642 | 0.0301 | 0.1898 | 0.0 | 0.0 | 0.0086 | nan | 0.0495 | 0.4697 | 0.1033 | 0.0 | 0.0573 | 0.0485 | 0.0262 | 0.1021 | 0.0 | 0.0 | 0.0019 | 0.0 | 0.0204 | 0.0612 | 0.0812 |
| 2.3042 | 5.0 | 200 | 2.3925 | 0.0604 | 0.1412 | 0.1975 | nan | 0.2435 | 0.0655 | 0.1292 | 0.2869 | 0.0 | 0.0 | 0.0046 | nan | 0.0669 | 0.4894 | 0.1258 | 0.0 | 0.2144 | 0.0516 | 0.1074 | 0.1515 | 0.0 | 0.0 | 0.0017 | 0.0 | 0.0243 | 0.0670 | 0.1063 |
| 2.0869 | 7.5 | 300 | 2.2183 | 0.0932 | 0.1839 | 0.3354 | nan | 0.5208 | 0.0717 | 0.1836 | 0.4192 | 0.0 | 0.0 | 0.0006 | nan | 0.0768 | 0.3608 | 0.2060 | 0.0 | 0.4077 | 0.0617 | 0.1436 | 0.2158 | 0.0 | 0.0 | 0.0003 | 0.0 | 0.0358 | 0.0787 | 0.1746 |
| 2.0559 | 10.0 | 400 | 2.0298 | 0.1110 | 0.2055 | 0.3886 | nan | 0.6144 | 0.1027 | 0.1815 | 0.4598 | 0.0 | 0.0 | 0.0005 | nan | 0.0909 | 0.3011 | 0.3041 | 0.0 | 0.4559 | 0.0880 | 0.1400 | 0.2409 | 0.0 | 0.0 | 0.0003 | 0.0 | 0.0534 | 0.1001 | 0.2538 |
| 1.9554 | 12.5 | 500 | 1.8871 | 0.1189 | 0.2111 | 0.4100 | nan | 0.6561 | 0.1004 | 0.1647 | 0.4900 | 0.0 | 0.0 | 0.0009 | nan | 0.0763 | 0.2611 | 0.3619 | 0.0 | 0.4739 | 0.0896 | 0.1263 | 0.2616 | 0.0 | 0.0 | 0.0007 | 0.0 | 0.0531 | 0.1207 | 0.3015 |
| 2.0181 | 15.0 | 600 | 1.7720 | 0.1247 | 0.2139 | 0.4199 | nan | 0.6735 | 0.1008 | 0.1723 | 0.4898 | 0.0 | 0.0 | 0.0 | nan | 0.0706 | 0.2349 | 0.3972 | 0.0 | 0.4860 | 0.0912 | 0.1293 | 0.2720 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0532 | 0.1386 | 0.3256 |
| 1.6723 | 17.5 | 700 | 1.7386 | 0.1258 | 0.2129 | 0.4251 | nan | 0.6860 | 0.1011 | 0.1724 | 0.5062 | 0.0 | 0.0 | 0.0 | nan | 0.0615 | 0.2167 | 0.3848 | 0.0 | 0.4927 | 0.0917 | 0.1304 | 0.2814 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0488 | 0.1426 | 0.3221 |
| 1.5613 | 20.0 | 800 | 1.7751 | 0.1269 | 0.2151 | 0.4322 | nan | 0.7050 | 0.1020 | 0.1730 | 0.5066 | 0.0 | 0.0 | 0.0 | nan | 0.0570 | 0.2288 | 0.3788 | 0.0 | 0.4990 | 0.0927 | 0.1308 | 0.2841 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0465 | 0.1502 | 0.3199 |
| 1.5653 | 22.5 | 900 | 1.7222 | 0.1272 | 0.2142 | 0.4277 | nan | 0.6924 | 0.1003 | 0.1794 | 0.5018 | 0.0 | 0.0 | 0.0 | nan | 0.0568 | 0.2295 | 0.3814 | 0.0 | 0.4969 | 0.0914 | 0.1341 | 0.2837 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0466 | 0.1523 | 0.3209 |
| 1.5196 | 25.0 | 1000 | 1.7138 | 0.1266 | 0.2136 | 0.4273 | nan | 0.6939 | 0.0982 | 0.1706 | 0.5041 | 0.0 | 0.0 | 0.0 | nan | 0.0557 | 0.2283 | 0.3849 | 0.0 | 0.4965 | 0.0899 | 0.1288 | 0.2845 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0462 | 0.1513 | 0.3225 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
{"license": "other", "tags": ["vision", "image-segmentation", "generated_from_trainer"], "base_model": "nvidia/mit-b3", "model-index": [{"name": "segformer-b2-seed63-apr-13-v1", "results": []}]}
|
unreal-hug/segformer-b2-seed63-apr-13-v1
| null |
[
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b3",
"license:other",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:36:42+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #segformer #vision #image-segmentation #generated_from_trainer #base_model-nvidia/mit-b3 #license-other #endpoints_compatible #region-us
|
segformer-b2-seed63-apr-13-v1
=============================
This model is a fine-tuned version of nvidia/mit-b3 on the unreal-hug/REAL\_DATASET\_SEG\_401\_6\_lbls dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7138
* Mean Iou: 0.1266
* Mean Accuracy: 0.2136
* Overall Accuracy: 0.4273
* Accuracy Unlabeled: nan
* Accuracy Lv: 0.6939
* Accuracy Rv: 0.0982
* Accuracy Ra: 0.1706
* Accuracy La: 0.5041
* Accuracy Vs: 0.0
* Accuracy As: 0.0
* Accuracy Mk: 0.0
* Accuracy Tk: nan
* Accuracy Asd: 0.0557
* Accuracy Vsd: 0.2283
* Accuracy Ak: 0.3849
* Iou Unlabeled: 0.0
* Iou Lv: 0.4965
* Iou Rv: 0.0899
* Iou Ra: 0.1288
* Iou La: 0.2845
* Iou Vs: 0.0
* Iou As: 0.0
* Iou Mk: 0.0
* Iou Tk: 0.0
* Iou Asd: 0.0462
* Iou Vsd: 0.1513
* Iou Ak: 0.3225
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-06
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.05
* training\_steps: 1000
### Training results
### Framework versions
* Transformers 4.37.2
* Pytorch 2.1.2+cu121
* Datasets 2.16.1
* Tokenizers 0.15.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] |
[
"TAGS\n#transformers #safetensors #segformer #vision #image-segmentation #generated_from_trainer #base_model-nvidia/mit-b3 #license-other #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] |
unconditional-image-generation
|
diffusers
|
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('way2mhemanth/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
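The pipeline call above also accepts a batch size and a generator, so a reproducible batch can be sampled like this (a small optional extension of the snippet above):

```python
import torch

# Sample four butterflies with a fixed seed for reproducibility.
images = pipeline(batch_size=4, generator=torch.manual_seed(0)).images
images[0].save("butterfly.png")
```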
|
{"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]}
|
way2mhemanth/sd-class-butterflies-32
| null |
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null |
2024-04-13T12:37:03+00:00
|
[] |
[] |
TAGS
#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us
|
# Model Card for Unit 1 of the Diffusion Models Class
This model is a diffusion model for unconditional image generation of cute butterflies.
## Usage
|
[
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage"
] |
[
"TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n",
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage"
] |
null |
adapter-transformers
|
# Adapter `BigTMiami/G_adapter_compactor_classification_C_30` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/G_adapter_compactor_classification_C_30", source="hf", set_active=True)
```
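A short, hedged prediction sketch for the loaded adapter and head (label names are undocumented here, so the printed probabilities are per-class only):

```python
import torch
from transformers import AutoTokenizer

# Hedged sketch: reuses the `model` loaded above; the mapping from class
# index to helpfulness label is an assumption left opaque here.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
batch = tokenizer(
    ["Great, detailed review.", "Totally unrelated rant."],
    padding=True,
    return_tensors="pt",
)
with torch.no_grad():
    probs = torch.softmax(model(**batch).logits, dim=-1)
print(probs)  # one row of class probabilities per input text
```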
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
{"tags": ["adapter-transformers", "roberta"], "datasets": ["BigTMiami/amazon_helpfulness"]}
|
BigTMiami/G_adapter_compactor_classification_C_30
| null |
[
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_helpfulness",
"region:us"
] | null |
2024-04-13T12:38:13+00:00
|
[] |
[] |
TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
|
# Adapter 'BigTMiami/G_adapter_compactor_classification_C_30' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
|
[
"# Adapter 'BigTMiami/G_adapter_compactor_classification_C_30' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
[
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n",
"# Adapter 'BigTMiami/G_adapter_compactor_classification_C_30' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
reinforcement-learning
| null |
# **Q-Learning** Agent playing **BlackJack-v1**
This is a trained model of a **Q-Learning** agent playing **BlackJack-v1**.
## Usage
```python
model = load_from_hub(repo_id="nzdb70/BlackJack-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
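`load_from_hub` is not defined in the snippet; a hedged reconstruction following the helper commonly used in the Hugging Face Deep RL course (the pickle file format is an assumption):

```python
import pickle
import gymnasium as gym  # assumed backend for the gym.make call above
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-learning model dict from the Hub (assumed format)."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```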
|
{"tags": ["BlackJack-v1", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "BlackJack-v1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "BlackJack-v1", "type": "BlackJack-v1"}, "metrics": [{"type": "mean_reward", "value": "-0.26 +/- 0.94", "name": "mean_reward", "verified": false}]}]}]}
|
nzdb70/BlackJack-v1
| null |
[
"BlackJack-v1",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null |
2024-04-13T12:39:30+00:00
|
[] |
[] |
TAGS
#BlackJack-v1 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing BlackJack-v1
This is a trained model of a Q-Learning agent playing BlackJack-v1.
## Usage
|
[
"# Q-Learning Agent playing1 BlackJack-v1\n This is a trained model of a Q-Learning agent playing BlackJack-v1 .\n\n ## Usage"
] |
[
"TAGS\n#BlackJack-v1 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 BlackJack-v1\n This is a trained model of a Q-Learning agent playing BlackJack-v1 .\n\n ## Usage"
] |
reinforcement-learning
| null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ProrabVasili/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
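To watch the greedy policy act, a hedged rollout sketch — it assumes the downloaded dict stores the table under a `"qtable"` key, as in the Deep RL course format:

```python
import numpy as np
import gymnasium as gym  # assumed environment backend

env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset(seed=42)
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # "qtable" key is an assumption
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
print("final reward:", reward)
```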
|
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
|
ProrabVasili/q-FrozenLake-v1-4x4-noSlippery
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null |
2024-04-13T12:39:46+00:00
|
[] |
[] |
TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1.
## Usage
|
[
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
[
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
object-detection
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_v1_50
This model is a fine-tuned version of [ssamperr/results_hugging_face](https://huggingface.co/ssamperr/results_hugging_face) on the None dataset.
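No usage example is provided on this card; a hedged inference sketch with the standard Transformers DETR API follows (repo id from this record's metadata; the image path and 0.5 score threshold are arbitrary choices):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

repo = "ssamperr/detr_v1_50"  # repo id from the record metadata
processor = AutoImageProcessor.from_pretrained(repo)
model = DetrForObjectDetection.from_pretrained(repo)

image = Image.open("sample.jpg").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above an arbitrary 0.5 confidence threshold.
results = processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.5
)[0]
print(results["labels"], results["scores"])
```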
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "ssamperr/results_hugging_face", "model-index": [{"name": "detr_v1_50", "results": []}]}
|
ssamperr/detr_v1_50
| null |
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:ssamperr/results_hugging_face",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:41:37+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-ssamperr/results_hugging_face #license-apache-2.0 #endpoints_compatible #region-us
|
# detr_v1_50
This model is a fine-tuned version of ssamperr/results_hugging_face on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# detr_v1_50\n\nThis model is a fine-tuned version of ssamperr/results_hugging_face on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 50",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cpu\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-ssamperr/results_hugging_face #license-apache-2.0 #endpoints_compatible #region-us \n",
"# detr_v1_50\n\nThis model is a fine-tuned version of ssamperr/results_hugging_face on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 50",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cpu\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null |
transformers
|
# Uploaded model
- **Developed by:** lightontech
- **License:** apache-2.0
- **Finetuned from model:** SeaLLMs/SeaLLM-7B-v2
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "SeaLLMs/SeaLLM-7B-v2"}
|
lightontech/seallm-reviews
| null |
[
"transformers",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:SeaLLMs/SeaLLM-7B-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:44:25+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #text-generation-inference #unsloth #mistral #trl #en #base_model-SeaLLMs/SeaLLM-7B-v2 #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: lightontech
- License: apache-2.0
- Finetuned from model : SeaLLMs/SeaLLM-7B-v2
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL" width="200"/>
|
[
"# Uploaded model\n\n- Developed by: lightontech\n- License: apache-2.0\n- Finetuned from model : SeaLLMs/SeaLLM-7B-v2\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #text-generation-inference #unsloth #mistral #trl #en #base_model-SeaLLMs/SeaLLM-7B-v2 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: lightontech\n- License: apache-2.0\n- Finetuned from model : SeaLLMs/SeaLLM-7B-v2\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
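The template is otherwise blank; a hedged generation sketch (chat-template availability is inferred from the `conversational` tag, not documented):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "thinhle/seallm-reviews"  # repo id from the record metadata
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Summarize this review: great battery, weak camera."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```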
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]}
|
thinhle/seallm-reviews
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T12:44:26+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #unsloth #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #unsloth #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
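The template is empty; a minimal hedged sketch (the `custom_code` tag on this record suggests `trust_remote_code=True` may be required):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Lodo97/Test1"  # repo id from the record metadata
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```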
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Lodo97/Test1
| null |
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T12:46:47+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# deep-wizard-7B-slerp
deep-wizard-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [deepseek-ai/deepseek-math-7b-rl](https://huggingface.co/deepseek-ai/deepseek-math-7b-rl)
* [deepseek-ai/deepseek-math-7b-instruct](https://huggingface.co/deepseek-ai/deepseek-math-7b-instruct)
## 🧩 Configuration
```yaml
models:
- model: deepseek-ai/deepseek-math-7b-base
# no parameters necessary for base model
- model: deepseek-ai/deepseek-math-7b-rl
parameters:
density: 0.5
weight: 0.7
- model: deepseek-ai/deepseek-math-7b-instruct
parameters:
density: 0.5
weight: 0.3
merge_method: dare_ties
base_model: deepseek-ai/deepseek-math-7b-base
parameters:
int8_mask: true
dtype: bfloat16
```
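## 💻 Usage

A minimal loading sketch (not from the original card): it assumes the merged weights are published under this repo id, `Eliorkalfon/deep-wizard-7B-slerp`, and loads them with plain `transformers`.

```python
# Hedged usage sketch; the repo id is taken from this card, everything else is generic.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Eliorkalfon/deep-wizard-7B-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("Question: what is 12 * 7? Answer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```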
|
{"license": "other", "tags": ["merge", "mergekit", "lazymergekit", "deepseek-ai/deepseek-math-7b-rl", "deepseek-ai/deepseek-math-7b-instruct"]}
|
Eliorkalfon/deep-wizard-7B-slerp
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"deepseek-ai/deepseek-math-7b-rl",
"deepseek-ai/deepseek-math-7b-instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T12:48:16+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #deepseek-ai/deepseek-math-7b-rl #deepseek-ai/deepseek-math-7b-instruct #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# deep-wizard-7B-slerp
deep-wizard-7B-slerp is a merge of the following models using mergekit:
* deepseek-ai/deepseek-math-7b-rl
* deepseek-ai/deepseek-math-7b-instruct
## Configuration
|
[
"# deep-wizard-7B-slerp\n\ndeep-wizard-7B-slerp is a merge of the following models using mergekit:\n* deepseek-ai/deepseek-math-7b-rl\n* deepseek-ai/deepseek-math-7b-instruct",
"## Configuration"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #deepseek-ai/deepseek-math-7b-rl #deepseek-ai/deepseek-math-7b-instruct #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# deep-wizard-7B-slerp\n\ndeep-wizard-7B-slerp is a merge of the following models using mergekit:\n* deepseek-ai/deepseek-math-7b-rl\n* deepseek-ai/deepseek-math-7b-instruct",
"## Configuration"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results4
This model is a fine-tuned version of [jkhan447/sarcasm-detection-Bert-base-uncased-POS](https://huggingface.co/jkhan447/sarcasm-detection-Bert-base-uncased-POS) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6646
- Accuracy: 0.7032
- F1: 0.3681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
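For illustration, here is a minimal `Trainer` setup that mirrors the hyperparameters above. The two-example toy dataset is a stand-in, since the card does not name its training data.

```python
# Illustrative sketch only: mirrors the listed hyperparameters; the toy
# dataset is a placeholder for the card's unknown training data.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "jkhan447/sarcasm-detection-Bert-base-uncased-POS"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base)

ds = Dataset.from_dict({"text": ["oh great, another Monday", "the bus was on time"],
                        "label": [1, 0]})
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                 padding="max_length", max_length=64))

args = TrainingArguments(
    output_dir="results4",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
)
Trainer(model=model, args=args, train_dataset=ds, eval_dataset=ds).train()
```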
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "jkhan447/sarcasm-detection-Bert-base-uncased-POS", "model-index": [{"name": "results4", "results": []}]}
|
dianamihalache27/results4
| null |
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:jkhan447/sarcasm-detection-Bert-base-uncased-POS",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:48:32+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-jkhan447/sarcasm-detection-Bert-base-uncased-POS #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# results4
This model is a fine-tuned version of jkhan447/sarcasm-detection-Bert-base-uncased-POS on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6646
- Accuracy: 0.7032
- F1: 0.3681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# results4\n\nThis model is a fine-tuned version of jkhan447/sarcasm-detection-Bert-base-uncased-POS on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6646\n- Accuracy: 0.7032\n- F1: 0.3681",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-jkhan447/sarcasm-detection-Bert-base-uncased-POS #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# results4\n\nThis model is a fine-tuned version of jkhan447/sarcasm-detection-Bert-base-uncased-POS on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6646\n- Accuracy: 0.7032\n- F1: 0.3681",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ibivibiv/aegolius-acadicus-v1-30b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
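As a concrete (unofficial) illustration, a downloaded quant can be run with `llama-cpp-python`; the file name below is taken from the Q4_K_M row in the table, the rest is generic.

```python
# Unofficial sketch: run a downloaded quant with llama-cpp-python
# (pip install llama-cpp-python); file name from the Q4_K_M row below.
from llama_cpp import Llama

llm = Llama(model_path="aegolius-acadicus-v1-30b.Q4_K_M.gguf", n_ctx=2048)
out = llm("Q: What does 'static quants' mean here? A:", max_tokens=64)
print(out["choices"][0]["text"])
```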
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-GGUF/resolve/main/aegolius-acadicus-v1-30b.Q2_K.gguf) | Q2_K | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-GGUF/resolve/main/aegolius-acadicus-v1-30b.IQ3_XS.gguf) | IQ3_XS | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-GGUF/resolve/main/aegolius-acadicus-v1-30b.Q3_K_S.gguf) | Q3_K_S | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-GGUF/resolve/main/aegolius-acadicus-v1-30b.IQ3_S.gguf) | IQ3_S | 13.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-GGUF/resolve/main/aegolius-acadicus-v1-30b.IQ3_M.gguf) | IQ3_M | 13.2 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-GGUF/resolve/main/aegolius-acadicus-v1-30b.Q3_K_M.gguf) | Q3_K_M | 14.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-GGUF/resolve/main/aegolius-acadicus-v1-30b.Q3_K_L.gguf) | Q3_K_L | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-GGUF/resolve/main/aegolius-acadicus-v1-30b.IQ4_XS.gguf) | IQ4_XS | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-GGUF/resolve/main/aegolius-acadicus-v1-30b.Q4_K_S.gguf) | Q4_K_S | 17.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-GGUF/resolve/main/aegolius-acadicus-v1-30b.Q4_K_M.gguf) | Q4_K_M | 18.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-GGUF/resolve/main/aegolius-acadicus-v1-30b.Q5_K_S.gguf) | Q5_K_S | 20.6 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-GGUF/resolve/main/aegolius-acadicus-v1-30b.Q5_K_M.gguf) | Q5_K_M | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-GGUF/resolve/main/aegolius-acadicus-v1-30b.Q6_K.gguf) | Q6_K | 24.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-GGUF/resolve/main/aegolius-acadicus-v1-30b.Q8_0.gguf) | Q8_0 | 31.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "llama2", "library_name": "transformers", "tags": ["moe", "moerge"], "base_model": "ibivibiv/aegolius-acadicus-v1-30b", "quantized_by": "mradermacher"}
|
mradermacher/aegolius-acadicus-v1-30b-GGUF
| null |
[
"transformers",
"gguf",
"moe",
"moerge",
"en",
"base_model:ibivibiv/aegolius-acadicus-v1-30b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:48:40+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #moe #moerge #en #base_model-ibivibiv/aegolius-acadicus-v1-30b #license-llama2 #endpoints_compatible #region-us
|
About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #moe #moerge #en #base_model-ibivibiv/aegolius-acadicus-v1-30b #license-llama2 #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1686
- Accuracy: 0.942
- F1: 0.9421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0609 | 1.0 | 250 | 0.1693 | 0.939 | 0.9391 |
| 0.0544 | 2.0 | 500 | 0.1686 | 0.942 | 0.9421 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
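A quick inference sketch (not part of the generated card): loading this checkpoint with the `transformers` pipeline; the repo id is taken from this card, the example sentence is arbitrary.

```python
# Inference sketch; repo id from this card, example sentence is arbitrary.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="mikarn/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe how wonderful this day turned out!"))
```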
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.942, "name": "Accuracy"}, {"type": "f1", "value": 0.9421167357895796, "name": "F1"}]}]}]}
|
mikarn/distilbert-base-uncased-finetuned-emotion
| null |
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-13T12:49:58+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1686
* Accuracy: 0.942
* F1: 0.9421
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu118
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
LLaMA-13B converted to work with Transformers/HuggingFace. This is under a special license; please see the LICENSE file for details.
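A hedged loading sketch (not part of the original card): the repo id below is this repository's, and `LlamaTokenizer`/`LlamaForCausalLM` are the standard `transformers` classes for converted LLaMA checkpoints.

```python
# Hedged sketch for loading this converted checkpoint with transformers.
from transformers import LlamaForCausalLM, LlamaTokenizer

model_id = "JG22/decapoda-research-llama-13b"
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```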
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA: Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measures to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering (a toy sketch follows this list),
- The toxicity score from Perspective API on RealToxicityPrompts.
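As a toy illustration of the exact-match criterion above (this is not FAIR's evaluation code), a normalized string comparison:

```python
# Toy exact-match metric: normalize whitespace and case, then compare.
def exact_match(prediction: str, reference: str) -> bool:
    norm = lambda s: " ".join(s.lower().strip().split())
    return norm(prediction) == norm(reference)

print(exact_match("Paris ", "paris"))  # True
```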
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following sources of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=6>Model hyperparameters</th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>4096</th><th>32</th><th>32</th><th>3.0E-04</th><th>4M</th><th>1T</th>
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T</th>
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T</th>
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T</th>
</tr>
</tbody>
</table>
*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93</th>
</tr>
<tr>
<th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94</th>
</tr>
<tr>
<th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92</th>
</tr>
<tr>
<th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th>
</tr>
</tbody>
</table>
*Table 2 - Summary of LLaMA Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary of bias in our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potentially fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
{"license": "other"}
|
JG22/decapoda-research-llama-13b
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-13T12:58:35+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #llama #text-generation #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
LLaMA-13B converted to work with Transformers/HuggingFace. This is under a special license; please see the LICENSE file for details.
LLaMA Model Card
================
Model details
-------------
Organization developing the model
The FAIR team of Meta AI.
Model date
LLaMA was trained between December 2022 and February 2023.
Model version
This is version 1 of the model.
Model type
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
Paper or resources for more information
More information can be found in the paper “LLaMA: Open and Efficient Foundation Language Models”, available at URL
Citations details
URL
License
Non-commercial bespoke license
Where to send questions or comments about the model
Questions and comments about LLaMA can be sent via the GitHub repository of the project, by opening an issue.
Intended use
------------
Primary intended uses
The primary use of LLaMA is research on large language models, including:
* exploring potential applications such as question answering, natural language understanding or reading comprehension,
* understanding capabilities and limitations of current language models, and developing techniques to improve those,
* evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
Primary intended users
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
Out-of-scope use cases
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
Factors
-------
Relevant factors
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
Evaluation factors
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
Metrics
-------
Model performance measures
We use the following measures to evaluate the model:
* Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
* Exact match for question answering,
* The toxicity score from Perspective API on RealToxicityPrompts.
Decision thresholds
Not applicable.
Approaches to uncertainty and variability
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
Evaluation datasets
-------------------
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
Training dataset
----------------
The model was trained using the following sources of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
Quantitative analysis
---------------------
Hyperparameters for the model architecture
*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
*Table 2 - Summary of LLaMA Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
No: 1, Category: Gender, FAIR LLM: 70.6
No: 2, Category: Religion, FAIR LLM: 79
No: 3, Category: Race/Color, FAIR LLM: 57
No: 4, Category: Sexual orientation, FAIR LLM: 81
No: 5, Category: Age, FAIR LLM: 70.1
No: 6, Category: Nationality, FAIR LLM: 64.2
No: 7, Category: Disability, FAIR LLM: 66.7
No: 8, Category: Physical appearance, FAIR LLM: 77.8
No: 9, Category: Socioeconomic status, FAIR LLM: 71.5
No: , Category: LLaMA Average, FAIR LLM: 66.6
*Table 3 - Summary of bias in our model output*
Ethical considerations
----------------------
Data
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
Human life
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
Mitigations
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
Risks and harms
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
Use cases
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potentially fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
[] |
[
"TAGS\n#transformers #pytorch #llama #text-generation #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |