pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1–900k) | metadata (stringlengths, 2–438k) | id (stringlengths, 5–122) | last_modified (null) | tags (sequencelengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25–25) | arxiv (sequencelengths, 0–201) | languages (sequencelengths, 0–1.83k) | tags_str (stringlengths, 17–9.34k) | text_str (stringlengths, 0–389k) | text_lists (sequencelengths, 0–722) | processed_texts (sequencelengths, 1–723)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | mlx |
# mlx-community/OpenELM-450M-instruct
This model was converted to MLX format from [`apple/OpenELM-450M-instruct`](https://huggingface.co/apple/OpenELM-450M-instruct) using mlx-lm version **0.10.0**.
Refer to the [original model card](https://huggingface.co/apple/OpenELM-450M-instruct) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/OpenELM-450M-instruct")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
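For quick experiments, mlx-lm also ships a command-line generator; a minimal sketch (flags assume mlx-lm ≥ 0.10 — check `python -m mlx_lm.generate --help` for your installed version):
```bash
# One-off generation from the shell; same model as the Python example above.
python -m mlx_lm.generate --model mlx-community/OpenELM-450M-instruct --prompt "hello"
```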
| {"license": "other", "tags": ["mlx"], "license_name": "apple-sample-code-license", "license_link": "LICENSE"} | mlx-community/OpenELM-450M-Instruct | null | [
"mlx",
"safetensors",
"openelm",
"custom_code",
"license:other",
"region:us"
] | null | 2024-04-24T12:25:16+00:00 | [] | [] | TAGS
#mlx #safetensors #openelm #custom_code #license-other #region-us
|
# mlx-community/OpenELM-450M-instruct
This model was converted to MLX format from ['apple/OpenELM-450M-instruct']() using mlx-lm version 0.10.0.
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# mlx-community/OpenELM-450M-instruct\nThis model was converted to MLX format from ['apple/OpenELM-450M-instruct']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#mlx #safetensors #openelm #custom_code #license-other #region-us \n",
"# mlx-community/OpenELM-450M-instruct\nThis model was converted to MLX format from ['apple/OpenELM-450M-instruct']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null | mlx |
# mlx-community/OpenELM-450M
This model was converted to MLX format from [`apple/OpenELM-450M`](https://huggingface.co/apple/OpenELM-450M) using mlx-lm version **0.10.0**.
Refer to the [original model card](https://huggingface.co/apple/OpenELM-450M) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/OpenELM-450M")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"license": "other", "tags": ["mlx"], "license_name": "apple-sample-code-license", "license_link": "LICENSE"} | mlx-community/OpenELM-450M | null | [
"mlx",
"safetensors",
"openelm",
"custom_code",
"license:other",
"region:us"
] | null | 2024-04-24T12:25:37+00:00 | [] | [] | TAGS
#mlx #safetensors #openelm #custom_code #license-other #region-us
|
# mlx-community/OpenELM-450M
This model was converted to MLX format from ['apple/OpenELM-450M']() using mlx-lm version 0.10.0.
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# mlx-community/OpenELM-450M\nThis model was converted to MLX format from ['apple/OpenELM-450M']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#mlx #safetensors #openelm #custom_code #license-other #region-us \n",
"# mlx-community/OpenELM-450M\nThis model was converted to MLX format from ['apple/OpenELM-450M']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_4iters_bs256_sample2_iter_3
This model is a fine-tuned version of [ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_2](https://huggingface.co/ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
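As an illustration only, the settings above map onto 🤗 `TrainingArguments` roughly as follows; the actual alignment-handbook/TRL training script is not part of this card, and names such as `output_dir` are assumptions:
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="0.001_ablation_4iters_bs256_sample2_iter_3",  # assumed name
    learning_rate=5e-7,
    per_device_train_batch_size=8,  # 8 per device x 8 GPUs x 4 accum steps = 256 total
    per_device_eval_batch_size=8,   # 8 per device x 8 GPUs = 64 total
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```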
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_2", "model-index": [{"name": "0.001_ablation_4iters_bs256_sample2_iter_3", "results": []}]} | ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_3 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T12:25:38+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_ablation_4iters_bs256_sample2_iter_3
This model is a fine-tuned version of ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_2 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.001_ablation_4iters_bs256_sample2_iter_3\n\nThis model is a fine-tuned version of ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_ablation_4iters_bs256_sample2_iter_3\n\nThis model is a fine-tuned version of ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
null | mlx |
# mlx-community/OpenELM-1_1B-instruct-4bit
This model was converted to MLX format from [`apple/OpenELM-1_1B-instruct`](https://huggingface.co/apple/OpenELM-1_1B-instruct) using mlx-lm version **0.10.0**.
Refer to the [original model card](https://huggingface.co/apple/OpenELM-1_1B-instruct) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/OpenELM-1_1B-instruct-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"license": "other", "tags": ["mlx"], "license_name": "apple-sample-code-license", "license_link": "LICENSE"} | mlx-community/OpenELM-1_1B-Instruct-4bit | null | [
"mlx",
"safetensors",
"openelm",
"custom_code",
"license:other",
"region:us"
] | null | 2024-04-24T12:26:16+00:00 | [] | [] | TAGS
#mlx #safetensors #openelm #custom_code #license-other #region-us
|
# mlx-community/OpenELM-1_1B-instruct-4bit
This model was converted to MLX format from ['apple/OpenELM-1_1B-instruct']() using mlx-lm version 0.10.0.
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# mlx-community/OpenELM-1_1B-instruct-4bit\nThis model was converted to MLX format from ['apple/OpenELM-1_1B-instruct']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#mlx #safetensors #openelm #custom_code #license-other #region-us \n",
"# mlx-community/OpenELM-1_1B-instruct-4bit\nThis model was converted to MLX format from ['apple/OpenELM-1_1B-instruct']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null | mlx |
# mlx-community/OpenELM-1_1B-instruct-8bit
This model was converted to MLX format from [`apple/OpenELM-1_1B-instruct`](https://huggingface.co/apple/OpenELM-1_1B-instruct) using mlx-lm version **0.10.0**.
Refer to the [original model card](https://huggingface.co/apple/OpenELM-1_1B-instruct) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/OpenELM-1_1B-instruct-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"license": "other", "tags": ["mlx"], "license_name": "apple-sample-code-license", "license_link": "LICENSE"} | mlx-community/OpenELM-1_1B-Instruct-8bit | null | [
"mlx",
"safetensors",
"openelm",
"custom_code",
"license:other",
"region:us"
] | null | 2024-04-24T12:26:56+00:00 | [] | [] | TAGS
#mlx #safetensors #openelm #custom_code #license-other #region-us
|
# mlx-community/OpenELM-1_1B-instruct-8bit
This model was converted to MLX format from ['apple/OpenELM-1_1B-instruct']() using mlx-lm version 0.10.0.
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# mlx-community/OpenELM-1_1B-instruct-8bit\nThis model was converted to MLX format from ['apple/OpenELM-1_1B-instruct']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#mlx #safetensors #openelm #custom_code #license-other #region-us \n",
"# mlx-community/OpenELM-1_1B-instruct-8bit\nThis model was converted to MLX format from ['apple/OpenELM-1_1B-instruct']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null | mlx |
# mlx-community/OpenELM-1_1B-4bit
This model was converted to MLX format from [`apple/OpenELM-1_1B`](https://huggingface.co/apple/OpenELM-1_1B) using mlx-lm version **0.10.0**.
Refer to the [original model card](https://huggingface.co/apple/OpenELM-1_1B) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/OpenELM-1_1B-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"license": "other", "tags": ["mlx"], "license_name": "apple-sample-code-license", "license_link": "LICENSE"} | mlx-community/OpenELM-1_1B-4bit | null | [
"mlx",
"safetensors",
"openelm",
"custom_code",
"license:other",
"region:us"
] | null | 2024-04-24T12:27:14+00:00 | [] | [] | TAGS
#mlx #safetensors #openelm #custom_code #license-other #region-us
|
# mlx-community/OpenELM-1_1B-4bit
This model was converted to MLX format from ['apple/OpenELM-1_1B']() using mlx-lm version 0.10.0.
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# mlx-community/OpenELM-1_1B-4bit\nThis model was converted to MLX format from ['apple/OpenELM-1_1B']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#mlx #safetensors #openelm #custom_code #license-other #region-us \n",
"# mlx-community/OpenELM-1_1B-4bit\nThis model was converted to MLX format from ['apple/OpenELM-1_1B']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
automatic-speech-recognition | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
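Pending details from the authors, a minimal sketch using the standard 🤗 `pipeline` API should apply to this checkpoint (the audio path is a placeholder):
```python
from transformers import pipeline

# Minimal ASR sketch; "sample.wav" is a placeholder audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="Mihaj/w2v-bert-karelian-CodeSwitching",
)
print(asr("sample.wav")["text"])
```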
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mihaj/w2v-bert-karelian-CodeSwitching | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:27:36+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2-bert #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2-bert #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | mlx |
# mlx-community/OpenELM-1_1B-8bit
This model was converted to MLX format from [`apple/OpenELM-1_1B`](https://huggingface.co/apple/OpenELM-1_1B) using mlx-lm version **0.10.0**.
Refer to the [original model card](https://huggingface.co/apple/OpenELM-1_1B) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/OpenELM-1_1B-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"license": "other", "tags": ["mlx"], "license_name": "apple-sample-code-license", "license_link": "LICENSE"} | mlx-community/OpenELM-1_1B-8bit | null | [
"mlx",
"safetensors",
"openelm",
"custom_code",
"license:other",
"region:us"
] | null | 2024-04-24T12:27:41+00:00 | [] | [] | TAGS
#mlx #safetensors #openelm #custom_code #license-other #region-us
|
# mlx-community/OpenELM-1_1B-8bit
This model was converted to MLX format from ['apple/OpenELM-1_1B']() using mlx-lm version 0.10.0.
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# mlx-community/OpenELM-1_1B-8bit\nThis model was converted to MLX format from ['apple/OpenELM-1_1B']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#mlx #safetensors #openelm #custom_code #license-other #region-us \n",
"# mlx-community/OpenELM-1_1B-8bit\nThis model was converted to MLX format from ['apple/OpenELM-1_1B']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unsloth_checkpoints
This model is a fine-tuned version of [unsloth/codellama-7b-bnb-4bit](https://huggingface.co/unsloth/codellama-7b-bnb-4bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
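Once trained, the saved LoRA adapter can be reloaded with PEFT; a minimal sketch, assuming the repo id from this card's metadata:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Sketch: load the adapter together with its 4-bit base model.
# Repo id comes from this card's metadata; adjust if hosted elsewhere.
model = AutoPeftModelForCausalLM.from_pretrained("MakTek/pine_script_code_llama_last")
tokenizer = AutoTokenizer.from_pretrained("unsloth/codellama-7b-bnb-4bit")
```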
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.1.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "unsloth", "generated_from_trainer"], "base_model": "unsloth/codellama-7b-bnb-4bit", "model-index": [{"name": "unsloth_checkpoints", "results": []}]} | MakTek/pine_script_code_llama_last | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/codellama-7b-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T12:29:07+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #unsloth #generated_from_trainer #base_model-unsloth/codellama-7b-bnb-4bit #license-apache-2.0 #region-us
|
# unsloth_checkpoints
This model is a fine-tuned version of unsloth/codellama-7b-bnb-4bit on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.1.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# unsloth_checkpoints\n\nThis model is a fine-tuned version of unsloth/codellama-7b-bnb-4bit on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.1.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #unsloth #generated_from_trainer #base_model-unsloth/codellama-7b-bnb-4bit #license-apache-2.0 #region-us \n",
"# unsloth_checkpoints\n\nThis model is a fine-tuned version of unsloth/codellama-7b-bnb-4bit on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.1.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
token-classification | transformers |
<p align="center">
<br>
<img src="http://www.ixa.eus/sites/default/files/anitdote.png" style="width: 45%;">
<br>
# mDeBERTa-base for Multilingual Correct Explanation Extraction in the Medical Domain
This model is a fine-tuned version of [mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) for a **novel extractive task**
that consists of **identifying the explanation of the correct answer** written by medical doctors. The model
has been fine-tuned on the multilingual [HiTZ/casimedicos-squad](https://huggingface.co/datasets/HiTZ/casimedicos-squad) dataset.
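Because the dataset follows the SQuAD extractive format, the checkpoint can be queried with the standard question-answering pipeline; a minimal sketch (the question and context strings are illustrative placeholders):
```python
from transformers import pipeline

# Sketch: extract the explanation span supporting the correct answer.
qa = pipeline("question-answering", model="HiTZ/mdeberta-expl-extraction-multi")
result = qa(
    question="Why is option 5 the correct answer?",
    context="Paradoxical pulse is a drop in blood pressure > 10 mmHg during inspiration...",
)
print(result["answer"])
```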
## Performance
F1 partial match scores (as defined in the [SQuAD extractive QA task](https://huggingface.co/datasets/rajpurkar/squad_v2)) are reported in the following table:
<img src="https://raw.githubusercontent.com/hitz-zentroa/multilingual-abstrct/main/resources/multilingual-abstrct-results.png" style="width: 75%;">
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
**Contact**: [Anar Yeginbergen](https://ixa.ehu.eus/node/13807?language=en) and [Rodrigo Agerri](https://ragerri.github.io/)
HiTZ Center - Ixa, University of the Basque Country UPV/EHU | {"language": ["en", "es", "fr", "it"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["HiTZ/casimedicos-squad"], "metrics": ["f1"], "pipeline_tag": "token-classification", "widget": [{"text": "Paradoxical pulse is a drop in blood pressure > 10 mmHg during inspiration; it represents an exaggeration of the physiological phenomenon consisting of inspiratory lowering of BP (normal up to 10 mmHg). In cardiac tamponade, inspiration, which causes an increase in blood flow to the right chambers, increasing their volume, secondarily causes a displacement of the interventricular septum to the left, so that the left heart lodges and expels less blood during systole and the pulse, therefore, decreases. In a normal heart this exaggerated displacement, caused by the pressure exerted by the tamponade on the RV free wall, does not occur. Sinus X represents the systolic collapse of the venous pulse, i.e., the pressure drop due to atrial relaxation (also partly due to a downward displacement of the RV base during systole). Sinus Y represents the diastolic collapse of the venous pulse, i.e., the pressure drop that occurs from the moment blood enters the tricuspid valve into the ventricle. In cardiac tamponade, the deep sinus X is characteristic. In constrictive pericarditis, the deep Y sinus. For all these reasons, the correct answer is 5."}]} | HiTZ/mdeberta-expl-extraction-multi | null | [
"transformers",
"safetensors",
"deberta-v2",
"question-answering",
"token-classification",
"en",
"es",
"fr",
"it",
"dataset:HiTZ/casimedicos-squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:30:40+00:00 | [] | [
"en",
"es",
"fr",
"it"
] | TAGS
#transformers #safetensors #deberta-v2 #question-answering #token-classification #en #es #fr #it #dataset-HiTZ/casimedicos-squad #license-apache-2.0 #endpoints_compatible #region-us
|
<p align="center">
<br>
<img src="URL style="width: 45%;">
<be>
# mDeBERTa-base for Multilingual Correct Explanation Extraction in the Medical Domain
This model is a fine-tuned version of mdeberta-v3-base for a novel extractive task
which consists of identifying the explanation of the correct answer written by medical doctors. The model
has been fine-tuned using the multilingual URL dataset.
## Performance
F1 partial match scores (as defined in SQuAD extractive QA task are reported in the following
table:
<img src="URL style="width: 75%;">
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
Contact: Anar Yeginbergen and Rodrigo Agerri
HiTZ Center - Ixa, University of the Basque Country UPV/EHU | [
"# mDeBERTa-base for Multilingual Correct Explanation Extraction in the Medical Domain\n\n\nThis model is a fine-tuned version of mdeberta-v3-base for a novel extractive task\nwhich consists of identifying the explanation of the correct answer written by medical doctors. The model\nhas been fine-tuned using the multilingual URL dataset.",
"## Performance\n\nF1 partial match scores (as defined in SQuAD extractive QA task are reported in the following\ntable:\n\n<img src=\"URL style=\"width: 75%;\">",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2\n\nContact: Anar Yeginbergen and Rodrigo Agerri\nHiTZ Center - Ixa, University of the Basque Country UPV/EHU"
] | [
"TAGS\n#transformers #safetensors #deberta-v2 #question-answering #token-classification #en #es #fr #it #dataset-HiTZ/casimedicos-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# mDeBERTa-base for Multilingual Correct Explanation Extraction in the Medical Domain\n\n\nThis model is a fine-tuned version of mdeberta-v3-base for a novel extractive task\nwhich consists of identifying the explanation of the correct answer written by medical doctors. The model\nhas been fine-tuned using the multilingual URL dataset.",
"## Performance\n\nF1 partial match scores (as defined in SQuAD extractive QA task are reported in the following\ntable:\n\n<img src=\"URL style=\"width: 75%;\">",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2\n\nContact: Anar Yeginbergen and Rodrigo Agerri\nHiTZ Center - Ixa, University of the Basque Country UPV/EHU"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
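In the absence of author-provided code, a minimal sketch for loading this PEFT adapter on the base model named in this card's metadata (Llama-2 access may require accepting its license):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: load the base model, then attach this adapter on top.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = PeftModel.from_pretrained(base, "Dewa/funny-llama")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```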
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "meta-llama/Llama-2-7b-chat-hf"} | Dewa/funny-llama | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-24T12:31:37+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_4iters_bs256_nodpo_sample2_iter_2
This model is a fine-tuned version of [ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_1](https://huggingface.co/ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_1", "model-index": [{"name": "0.001_ablation_4iters_bs256_nodpo_sample2_iter_2", "results": []}]} | ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_2 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T12:32:05+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_ablation_4iters_bs256_nodpo_sample2_iter_2
This model is a fine-tuned version of ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_1 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.001_ablation_4iters_bs256_nodpo_sample2_iter_2\n\nThis model is a fine-tuned version of ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_1 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_ablation_4iters_bs256_nodpo_sample2_iter_2\n\nThis model is a fine-tuned version of ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_1 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
null | null | **"It keeps getting better!"**
"One of the top recent performers in the **Chaiverse Leaderboard**!"
GGUF-IQ-Imatrix quants for [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B).
> [!IMPORTANT]
> **Updated!**
> These quants have been redone with the fixes from [llama.cpp/pull/6920](https://github.com/ggerganov/llama.cpp/pull/6920) in mind. <br>
> Use **KoboldCpp version 1.64** or higher.
> [!WARNING]
> Compatible SillyTavern presets [here (recommended/simple)](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B/tree/main/Official%20Poppy%20Porpoise%20ST%20Presets) or [here (Virt's)](https://huggingface.co/Virt-io/SillyTavern-Presets). <br>
> Use the latest version of KoboldCpp. **Use the provided presets.** <br>
> This is all still highly experimental, so let the authors know how it performs for you; feedback is more important than ever now.
> [!NOTE]
> For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** quant for context sizes up to 12288.
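As a reference invocation, a minimal KoboldCpp sketch (the `.gguf` filename is hypothetical; match it to the file you actually download):
```bash
# Run the recommended quant with a 12288-token context in KoboldCpp.
python koboldcpp.py --model Poppy_Porpoise-v0.7-L3-8B-Q4_K_M-imat.gguf --contextsize 12288
```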
**Original model information:**

# Update: Vision/multimodal capabilities again!
If you want to use vision functionality:
* You must use the latest version of [Koboldcpp](https://github.com/LostRuins/koboldcpp).
# To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo: https://huggingface.co/ChaoticNeutrals/Llava_1.5_Llama3_mmproj
* You can load the **mmproj** by using the corresponding section in the interface:
 | {"language": ["en"], "tags": ["roleplay", "llama3", "sillytavern"]} | Lewdiculous/Poppy_Porpoise-v0.7-L3-8B-GGUF-IQ-Imatrix | null | [
"gguf",
"roleplay",
"llama3",
"sillytavern",
"en",
"region:us"
] | null | 2024-04-24T12:32:05+00:00 | [] | [
"en"
] | TAGS
#gguf #roleplay #llama3 #sillytavern #en #region-us
| "It keeps getting better!"
"One of the top recent performers in the Chaiverse Leaderboard!"
GGUF-IQ-Imatrix quants for ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B.
> [!IMPORTANT]
> Updated!
> These quants have been redone with the fixes from URL in mind. <br>
> Use KoboldCpp version 1.64 or higher.
> [!WARNING]
> Compatible SillyTavern presets here (recommended/simple)) or here (Virt's). <br>
> Use the latest version of KoboldCpp. Use the provided presets. <br>
> This is all still highly experimental, let the authors know how it performs for you, feedback is more important than ever now.
> [!NOTE]
> For 8GB VRAM GPUs, I recommend the Q4_K_M-imat quant for up to 12288 context sizes.
Original model information:
!image/png
# Update: Vision/multimodal capabilities again!
If you want to use vision functionality:
* You must use the latest versions of Koboldcpp.
# To use the multimodal capabilities of this model and use vision you need to load the specified mmproj file, this can be found inside this model repo. URL
* You can load the mmproj by using the corresponding section in the interface:
!image/png | [
"# Update: Vision/multimodal capabilities again!\n\n If you want to use vision functionality:\n\n * You must use the latest versions of Koboldcpp.",
"# To use the multimodal capabilities of this model and use vision you need to load the specified mmproj file, this can be found inside this model repo. URL\n \n * You can load the mmproj by using the corresponding section in the interface:\n\n !image/png"
] | [
"TAGS\n#gguf #roleplay #llama3 #sillytavern #en #region-us \n",
"# Update: Vision/multimodal capabilities again!\n\n If you want to use vision functionality:\n\n * You must use the latest versions of Koboldcpp.",
"# To use the multimodal capabilities of this model and use vision you need to load the specified mmproj file, this can be found inside this model repo. URL\n \n * You can load the mmproj by using the corresponding section in the interface:\n\n !image/png"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOOwO/stable-lol | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T12:32:37+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** ogdanneedham
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
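As a hedged illustration (not provided by the author), this LoRA can presumably be loaded for inference with Unsloth's usual API; version details may differ.

```python
# Unofficial sketch (not from the author): loading this LoRA for inference
# with Unsloth. Assumes a CUDA GPU and `pip install unsloth`; the exact API
# may vary between Unsloth versions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ogdanneedham/mistral-gs-big-lora",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer(["[INST] Say hello. [/INST]"], return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```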
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"} | ogdanneedham/mistral-gs-big-lora | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:33:26+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: ogdanneedham
- License: apache-2.0
- Finetuned from model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: ogdanneedham\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: ogdanneedham\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Lakoc/voxpopuli_bpe30_cz | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:35:23+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # kukulemon-32K-7B-GGUF
These are GGUF quants of a proof-of-concept merge capable of a functional 32K context length while being derived from [kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B).
The functioning 32K context window has been folded in via a merger of Mistral 7B v0.2 models.
SLERP merge appears to be viable, but DARE-TIES merge risks producing a damaged model and is therefore not recommended.
Although the resulting model natively supports the Alpaca prompt, I've tested with ChatML prompts successfully. Medium temperature (around 1) with low minP (e.g., 0.01) works with ChatML prompts in my most recent testing.
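To make those sampler settings concrete, below is a minimal, unofficial llama-cpp-python sketch; the GGUF file name is hypothetical, and `min_p` support assumes a reasonably recent llama-cpp-python build.

```python
# Unofficial sketch of the sampler settings described above, via
# llama-cpp-python. Assumptions: `pip install llama-cpp-python`, a locally
# downloaded quant (the file name below is hypothetical), and a build recent
# enough to expose the min_p sampler.
from llama_cpp import Llama

llm = Llama(
    model_path="kukulemon-32K-7B.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=32768,  # the merge targets a functional 32K context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    temperature=1.0,  # medium temperature, as suggested above
    min_p=0.01,       # low minP, as suggested above
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```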
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
- Full weights: [grimjim/kukulemon-32K-7B](https://huggingface.co/grimjim/kukulemon-32K-7B)
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B](https://huggingface.co/grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B)
* [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: grimjim/kukulemon-7B
layer_range: [0, 32]
- model: grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
layer_range: [0, 32]
# or, the equivalent models: syntax:
# models:
merge_method: slerp
base_model: grimjim/kukulemon-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
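For reference, a config like the one above is normally executed with mergekit. The sketch below is an assumption-laden outline of mergekit's Python entry point (file names are hypothetical, and signatures can vary between mergekit versions), not the exact command used to build this model.

```python
# Assumption-laden outline: running a config like the YAML above with
# mergekit's Python API (`pip install mergekit`). File names are hypothetical
# and MergeConfiguration/run_merge details may differ across versions.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("kukulemon-32K-7B.yaml") as f:  # the config above, saved locally
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./kukulemon-32K-7B",  # output directory for the merged weights
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```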
| {"license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B", "grimjim/kukulemon-7B"], "pipeline_tag": "text-generation"} | grimjim/kukulemon-32K-7B-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"text-generation",
"base_model:grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B",
"base_model:grimjim/kukulemon-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:35:55+00:00 | [] | [] | TAGS
#transformers #gguf #mergekit #merge #text-generation #base_model-grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B #base_model-grimjim/kukulemon-7B #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| # kukulemon-32K-7B-GGUF
These are GGUF quants of a proof-of-concept merge capable of a functional 32K context length while being derived from kukulemon-7B.
The functioning 32K context window has been folded in via a merger of Mistral 7B v0.2 models.
SLERP merge appears to be viable, but DARE-TIES merge risks producing a damaged model and is therefore not recommended.
Although the resulting model natively supports the Alpaca prompt, I've tested with ChatML prompts successfully. Medium temperature (around 1) with low minP (e.g., 0.01) works with ChatML prompts in my most recent testing.
This is a merge of pre-trained language models created using mergekit.
- Full weights: grimjim/kukulemon-32K-7B
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
* grimjim/kukulemon-7B
### Configuration
The following YAML configuration was used to produce this model:
| [
"# kukulemon-32K-7B-GGUF\n\nThese are GGUF quants of a proof of concept a merge capable of functional 32K context length while being derived from kukulemon-7B. \nThe functioning 32K context window has been folded in via a merger of Mistral 7B v0.2 models.\nSLERP merge appears to be viable, but DARE-TIES merge risks producing a damaged model and is therefore not recommended.\n\nAlthough the resulting model natively supports Alpaca prompt, I've tested with ChatML prompts successfuly. Medium temperature (around 1) with low minP (e.g., 0.01) works with ChatML prompts in my most recent testing.\n\nThis is a merge of pre-trained language models created using mergekit.\n\n- Full weights: grimjim/kukulemon-32K-7B",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B\n* grimjim/kukulemon-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #gguf #mergekit #merge #text-generation #base_model-grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B #base_model-grimjim/kukulemon-7B #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# kukulemon-32K-7B-GGUF\n\nThese are GGUF quants of a proof of concept a merge capable of functional 32K context length while being derived from kukulemon-7B. \nThe functioning 32K context window has been folded in via a merger of Mistral 7B v0.2 models.\nSLERP merge appears to be viable, but DARE-TIES merge risks producing a damaged model and is therefore not recommended.\n\nAlthough the resulting model natively supports Alpaca prompt, I've tested with ChatML prompts successfuly. Medium temperature (around 1) with low minP (e.g., 0.01) works with ChatML prompts in my most recent testing.\n\nThis is a merge of pre-trained language models created using mergekit.\n\n- Full weights: grimjim/kukulemon-32K-7B",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B\n* grimjim/kukulemon-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Lakoc/voxpopuli_bpe25_cz | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:36:39+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
question-answering | transformers |
---
license: cc-by-4.0
language:
- es
tags:
- casimedicos
- explainability
- medical exams
- medical question answering
- extractive question answering
- squad
- multilinguality
- LLMs
- LLM
pretty_name: mdeberta-expl-extraction-multi
task_categories:
- question-answering
size_categories:
- 1K<n<10K
---
<p align="center">
<br>
<img src="http://www.ixa.eus/sites/default/files/anitdote.png" style="height: 200px;">
<br>
</p>
# mdeberta-v3-base finetuned for Explanatory Argument Extraction
We finetuned mdeberta-v3-base on a **novel extractive task** which consists of **identifying the explanation of the correct answer written by
medical doctors in medical exams**.
The training data is based on [Antidote CasiMedicos](https://huggingface.co/datasets/HiTZ/casimedicos-squad) for EN,ES,FR,IT languages.
The data source consists of Resident Medical Intern or Médico Interno Residente (MIR) exams, originally
created by [CasiMedicos](https://www.casimedicos.com), a Spanish community of medical professionals who collaboratively, voluntarily,
and free of charge, publishes written explanations about the possible answers included in the MIR exams. The aim is to generate a resource that
helps future medical doctors to study towards the MIR examinations. The commented MIR exams, including the explanations, are published in the [CasiMedicos
Project MIR 2.0 website](https://www.casimedicos.com/mir-2-0/).
We have extracted, cleaned, structured, and annotated the available data so that each document in **casimedicos-squad** includes the clinical case, the correct answer,
the multiple-choice questions and the commented exam written by native Spanish medical doctors. The comments have been annotated with the span in the text that
corresponds to the explanation of the correct answer (see example below).
<table style="width:33%">
 <tr>
 <th colspan="2">casimedicos-squad splits</th>
 </tr>
 <tr>
<td>train</td>
<td>404</td>
</tr>
<tr>
<td>validation</td>
<td>56</td>
</tr>
<tr>
<td>test</td>
<td>119</td>
</tr>
</table>
## Example
<p align="center">
<img src="https://github.com/ixa-ehu/antidote-casimedicos/blob/main/casimedicos-exp.png?raw=true" style="height: 650px;">
</p>
The example above shows a document in CasiMedicos containing the textual content, including Clinical Case (C), Question (Q), Possible Answers (P),
and Explanation (E). Furthermore, for **casimedicos-squad** we annotated the span in the explanation (E) that corresponds to the correct answer (A).
The process of manually annotating the corpus consisted of specifying where the explanations of the correct answers begin and end.
In order to obtain grammatically complete correct answer explanations, annotating full sentences or subordinate clauses was preferred over
shorter spans.
## Data Explanation
The dataset is structured as a list of documents ("paragraphs"), each of which includes the following fields (a minimal invented example follows the list):
- **context**: the explanation (E) in the document
- **qas**: list of possible answers and questions. This element contains:
- **answers**: an answer which corresponds to the explanation of the correct answer (A)
- **question**: the clinical case (C) and question (Q)
- **id**: unique identifier for the document
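As a minimal illustration of that layout, the hypothetical (entirely invented) entry below mirrors the fields listed above in the SQuAD-style structure the dataset follows:

```python
# Hypothetical entry (all values invented) mirroring the fields above in the
# SQuAD-style layout the dataset follows.
paragraph = {
    "context": "Explanation (E) written by the medical doctor ...",
    "qas": [
        {
            "id": "casimedicos-0001",  # invented identifier
            "question": "Clinical case (C) ... Question (Q) ...",
            "answers": [
                {
                    "text": "the span that explains the correct answer (A)",
                    "answer_start": 42,  # character offset, per SQuAD convention
                }
            ],
        }
    ],
}
```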
## Citation
If you use this data please **cite the following paper**:
```bibtex
@misc{goenaga2023explanatory,
title={Explanatory Argument Extraction of Correct Answers in Resident Medical Exams},
author={Iakes Goenaga and Aitziber Atutxa and Koldo Gojenola and Maite Oronoz and Rodrigo Agerri},
year={2023},
eprint={2312.00567},
archivePrefix={arXiv}
}
```
**Contact**: [Iakes Goenaga](http://www.hitz.eus/es/node/65) and [Rodrigo Agerri](https://ragerri.github.io/)
HiTZ Center - Ixa, University of the Basque Country UPV/EHU
### Model Description
- 📖 **Paper**: [Explanatory Argument Extraction of Correct Answers in Resident Medical Exams](https://arxiv.org/abs/2312.00567)
- 💻 **Github Repo** (Data and Code): [https://github.com/ixa-ehu/antidote-casimedicos](https://github.com/ixa-ehu/antidote-casimedicos)
- 🌐 **Project Website**: [https://univ-cotedazur.eu/antidote](https://univ-cotedazur.eu/antidote)
- **Funding**: CHIST-ERA XAI 2019 call. Antidote (PCI2020-120717-2) funded by MCIN/AEI /10.13039/501100011033 and by European Union NextGenerationEU/PRTR
- **Language(s) (NLP):** EN,ES,FR,IT
- **License:** Apache License 2
- **Finetuned from model:** microsoft/mdeberta-v3-base
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
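Pending an official snippet, one plausible way to query the checkpoint is the standard 🤗 `question-answering` pipeline; the question/context strings below are placeholders, not real exam content.

```python
# Unofficial sketch: extractive QA over a commented exam with the Hugging Face
# question-answering pipeline. The question/context strings are placeholders.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="HiTZ/xlm-roberta-large-expl-extraction-multi",
)

result = qa(
    question="Clinical case (C) and question (Q) text goes here.",
    context="The medical doctor's commented explanation (E) goes here.",
)
print(result["answer"], result["score"])
```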
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"license": "apache-2.0"} | HiTZ/xlm-roberta-large-expl-extraction-multi | null | [
"transformers",
"safetensors",
"xlm-roberta",
"question-answering",
"arxiv:2312.00567",
"arxiv:1910.09700",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:36:51+00:00 | [
"2312.00567",
"1910.09700"
] | [] | TAGS
#transformers #safetensors #xlm-roberta #question-answering #arxiv-2312.00567 #arxiv-1910.09700 #license-apache-2.0 #endpoints_compatible #region-us
|
---
license: cc-by-4.0
language:
- es
tags:
- casimedicos
- explainability
- medical exams
- medical question answering
- extractive question answering
- squad
- multilinguality
- LLMs
- LLM
pretty_name: mdeberta-expl-extraction-multi
task_categories:
- question-answering
size_categories:
- 1K<n<10K
---
<p align="center">
<br>
<img src="URL style="height: 200px;">
<br>
# mdeberta-v3-base finetuned for Explanatory Argument Extraction
We finetuned mdeberta-v3-base on a novel extractive task which consists of identifying the explanation of the correct answer written by
medical doctors in medical exams.
The training data is based on Antidote CasiMedicos for EN,ES,FR,IT languages.
The data source consists of Resident Medical Intern or Médico Interno Residente (MIR) exams, originally
created by CasiMedicos, a Spanish community of medical professionals who collaboratively, voluntarily,
and free of charge, publishes written explanations about the possible answers included in the MIR exams. The aim is to generate a resource that
helps future medical doctors to study towards the MIR examinations. The commented MIR exams, including the explanations, are published in the CasiMedicos
Project MIR 2.0 website.
We have extracted, cleaned, structured, and annotated the available data so that each document in casimedicos-squad includes the clinical case, the correct answer,
the multiple-choice questions and the commented exam written by native Spanish medical doctors. The comments have been annotated with the span in the text that
corresponds to the explanation of the correct answer (see example below).
<table style="width:33%">
 <tr>
 <th colspan="2">casimedicos-squad splits</th>
 </tr>
 <tr>
<td>train</td>
<td>404</td>
</tr>
<tr>
<td>validation</td>
<td>56</td>
</tr>
<tr>
<td>test</td>
<td>119</td>
</tr>
</table>
## Example
<p align="center">
<img src="URL style="height: 650px;">
</p>
The example above shows a document in CasiMedicos containing the textual content, including Clinical Case (C), Question (Q), Possible Answers (P),
and Explanation (E). Furthermore, for casimedicos-squad we annotated the span in the explanation (E) that corresponds to the correct answer (A).
The process of manually annotating the corpus consisted of specifying where the explanations of the correct answers begin and end.
In order to obtain grammatically complete correct answer explanations, annotating full sentences or subordinate clauses was preferred over
shorter spans.
## Data Explanation
The dataset is structured as a list of documents ("paragraphs"), each of which includes the following fields:
- context: the explanation (E) in the document
- qas: list of possible answers and questions. This element contains:
- answers: an answer which corresponds to the explanation of the correct answer (A)
- question: the clinical case (C) and question (Q)
- id: unique identifier for the document
If you use this data please cite the following paper:
Contact: Iakes Goenaga and Rodrigo Agerri
HiTZ Center - Ixa, University of the Basque Country UPV/EHU
### Model Description
- Paper: Explanatory Argument Extraction of Correct Answers in Resident Medical Exams
- Github Repo (Data and Code): URL
- Project Website: URL
- Funding: CHIST-ERA XAI 2019 call. Antidote (PCI2020-120717-2) funded by MCIN/AEI /10.13039/501100011033 and by European Union NextGenerationEU/PRTR
- Language(s) (NLP): EN,ES,FR,IT
- License: Apache License 2
- Finetuned from model: microsoft/mdeberta-v3-base
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# mdeberta-v3-base finetuned for Explanatory Argument Extraction\n\nWe finetuned mdeberta-v3-base on a novel extractive task which consists of identifying the explanation of the correct answer written by\nmedical doctors in medical exams.\n\nThe training data is based on Antidote CasiMedicos for EN,ES,FR,IT languages.\n\nThe data source consists of Resident Medical Intern or Médico Interno Residente (MIR) exams, originally\ncreated by CasiMedicos, a Spanish community of medical professionals who collaboratively, voluntarily, \nand free of charge, publishes written explanations about the possible answers included in the MIR exams. The aim is to generate a resource that\nhelps future medical doctors to study towards the MIR examinations. The commented MIR exams, including the explanations, are published in the CasiMedicos \nProject MIR 2.0 website.\n\nWe have extracted, clean, structure and annotated the available data so that each document in casimedicos-squad includes the clinical case, the correct answer, \nthe multiple-choice questions and the commented exam written by native Spanish medical doctors. The comments have been annotated with the span in the text that\ncorresponds to the explanation of the correct answer (see example below).\n\n<table style=\"width:33%\">\n <tr>\n <th>casimedicos-squad splits</th>\n <tr>\n <td>train</td>\n <td>404</td>\n </tr>\n <tr>\n <td>validation</td>\n <td>56</td>\n </tr>\n <tr>\n <td>test</td>\n <td>119</td>\n </tr>\n </table>",
"## Example\n\n<p align=\"center\">\n<img src=\"URL style=\"height: 650px;\">\n</p>\n\nThe example above shows a document in CasiMedicos containing the textual content, including Clinical Case (C), Question (Q), Possible Answers (P), \nand Explanation (E). Furthermore, for casimedicos-squad we annotated the span in the explanation (E) that corresponds to the correct answer (A).\n\n The process of manually annotating the corpus consisted of specifying where the explanations of the correct answers begin and end. \n In order to obtain grammatically complete correct answer explanations, annotating full sentences or subordinate clauses was preferred over\n shorter spans.",
"## Data Explanation\n\nThe dataset is structured as a list of documents (\"paragraphs\") where each of them include:\n\n- context: the explanation (E) in the document\n- qas: list of possible answers and questions. This element contains:\n - answers: an answer which corresponds to the explanation of the correct answer (A)\n - question: the clinical case (C) and question (Q)\n - id: unique identifier for the document\n\nIf you use this data please cite the following paper:\n\n\n\nContact: Iakes Goenaga and Rodrigo Agerri\nHiTZ Center - Ixa, University of the Basque Country UPV/EHU",
"### Model Description\n\n\n- Paper:Explanatory Argument Extraction of Correct Answers in Resident Medical Exams\n- Github Repo (Data and Code): URL\n- Project Website: URL\n- Funding: CHIST-ERA XAI 2019 call. Antidote (PCI2020-120717-2) funded by MCIN/AEI /10.13039/501100011033 and by European Union NextGenerationEU/PRTR\n- Language(s) (NLP): EN,ES,FR,IT\n- License: Apache License 2\n- Finetuned from model: microsoft/mdeberta-v3-base",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #xlm-roberta #question-answering #arxiv-2312.00567 #arxiv-1910.09700 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# mdeberta-v3-base finetuned for Explanatory Argument Extraction\n\nWe finetuned mdeberta-v3-base on a novel extractive task which consists of identifying the explanation of the correct answer written by\nmedical doctors in medical exams.\n\nThe training data is based on Antidote CasiMedicos for EN,ES,FR,IT languages.\n\nThe data source consists of Resident Medical Intern or Médico Interno Residente (MIR) exams, originally\ncreated by CasiMedicos, a Spanish community of medical professionals who collaboratively, voluntarily, \nand free of charge, publishes written explanations about the possible answers included in the MIR exams. The aim is to generate a resource that\nhelps future medical doctors to study towards the MIR examinations. The commented MIR exams, including the explanations, are published in the CasiMedicos \nProject MIR 2.0 website.\n\nWe have extracted, clean, structure and annotated the available data so that each document in casimedicos-squad includes the clinical case, the correct answer, \nthe multiple-choice questions and the commented exam written by native Spanish medical doctors. The comments have been annotated with the span in the text that\ncorresponds to the explanation of the correct answer (see example below).\n\n<table style=\"width:33%\">\n <tr>\n <th>casimedicos-squad splits</th>\n <tr>\n <td>train</td>\n <td>404</td>\n </tr>\n <tr>\n <td>validation</td>\n <td>56</td>\n </tr>\n <tr>\n <td>test</td>\n <td>119</td>\n </tr>\n </table>",
"## Example\n\n<p align=\"center\">\n<img src=\"URL style=\"height: 650px;\">\n</p>\n\nThe example above shows a document in CasiMedicos containing the textual content, including Clinical Case (C), Question (Q), Possible Answers (P), \nand Explanation (E). Furthermore, for casimedicos-squad we annotated the span in the explanation (E) that corresponds to the correct answer (A).\n\n The process of manually annotating the corpus consisted of specifying where the explanations of the correct answers begin and end. \n In order to obtain grammatically complete correct answer explanations, annotating full sentences or subordinate clauses was preferred over\n shorter spans.",
"## Data Explanation\n\nThe dataset is structured as a list of documents (\"paragraphs\") where each of them include:\n\n- context: the explanation (E) in the document\n- qas: list of possible answers and questions. This element contains:\n - answers: an answer which corresponds to the explanation of the correct answer (A)\n - question: the clinical case (C) and question (Q)\n - id: unique identifier for the document\n\nIf you use this data please cite the following paper:\n\n\n\nContact: Iakes Goenaga and Rodrigo Agerri\nHiTZ Center - Ixa, University of the Basque Country UPV/EHU",
"### Model Description\n\n\n- Paper:Explanatory Argument Extraction of Correct Answers in Resident Medical Exams\n- Github Repo (Data and Code): URL\n- Project Website: URL\n- Funding: CHIST-ERA XAI 2019 call. Antidote (PCI2020-120717-2) funded by MCIN/AEI /10.13039/501100011033 and by European Union NextGenerationEU/PRTR\n- Language(s) (NLP): EN,ES,FR,IT\n- License: Apache License 2\n- Finetuned from model: microsoft/mdeberta-v3-base",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# tetrisblack/Mixllama3-8x8b-Instruct-v0.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`sherazkhan/Mixllama3-8x8b-Instruct-v0.1`](https://huggingface.co/sherazkhan/Mixllama3-8x8b-Instruct-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/sherazkhan/Mixllama3-8x8b-Instruct-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo tetrisblack/Mixllama3-8x8b-Instruct-v0.1-Q4_K_M-GGUF --model mixllama3-8x8b-instruct-v0.1.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo tetrisblack/Mixllama3-8x8b-Instruct-v0.1-Q4_K_M-GGUF --model mixllama3-8x8b-instruct-v0.1.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mixllama3-8x8b-instruct-v0.1.Q4_K_M.gguf -n 128
```
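For Python use, the sketch below pulls this quantized file from the Hub with the llama-cpp-python bindings (assumed installed via `pip install llama-cpp-python huggingface_hub`); the prompt and sampling settings are illustrative:
```python
# Hedged sketch: run this GGUF checkpoint through llama-cpp-python instead of the CLI.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="tetrisblack/Mixllama3-8x8b-Instruct-v0.1-Q4_K_M-GGUF",
    filename="mixllama3-8x8b-instruct-v0.1.Q4_K_M.gguf",
    n_ctx=2048,  # same context size as the server example above
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```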
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["text Generation", "llama-cpp", "gguf-my-repo"]} | tetrisblack/Mixllama3-8x8b-Instruct-v0.1-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"text Generation",
"llama-cpp",
"gguf-my-repo",
"en",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:38:03+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #text Generation #llama-cpp #gguf-my-repo #en #license-llama3 #endpoints_compatible #region-us
|
# tetrisblack/Mixllama3-8x8b-Instruct-v0.1-Q4_K_M-GGUF
This model was converted to GGUF format from 'sherazkhan/Mixllama3-8x8b-Instruct-v0.1' using URL via URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# tetrisblack/Mixllama3-8x8b-Instruct-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'sherazkhan/Mixllama3-8x8b-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #text Generation #llama-cpp #gguf-my-repo #en #license-llama3 #endpoints_compatible #region-us \n",
"# tetrisblack/Mixllama3-8x8b-Instruct-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'sherazkhan/Mixllama3-8x8b-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
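Until the authors add official instructions, the sketch below shows one plausible way to load this adapter; the adapter repo id comes from this card's metadata, the base model from its `base_model` field, and the dtype, device map, and prompt are illustrative assumptions:

```python
# Hedged sketch: attach the LoRA adapter from this repo to its Llama-3 base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"  # from this card's metadata
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "LLMQ/LLaMA-3-8B-IR-QLoRA")
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```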
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
| {"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B"} | LLMQ/LLaMA-3-8B-IR-QLoRA | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"region:us"
] | null | 2024-04-24T12:38:20+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
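As a placeholder until the authors fill this in, the sketch below runs this SMILES-to-caption checkpoint as a standard seq2seq model; the repo id comes from this card, and the SMILES string and generation settings are illustrative:

```python
# Hedged sketch: T5-style sequence-to-sequence inference for SMILES -> caption.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "sataayu/molt5-augmented-default-800-small-smiles2caption"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

input_ids = tokenizer("CCO", return_tensors="pt").input_ids  # ethanol, illustrative
output_ids = model.generate(input_ids, num_beams=5, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```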
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | sataayu/molt5-augmented-default-800-small-smiles2caption | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T12:40:07+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs_gptq_training
This model is a fine-tuned version of [astronomer/Llama-3-8B-Instruct-GPTQ-8-Bit](https://huggingface.co/astronomer/Llama-3-8B-Instruct-GPTQ-8-Bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
- mixed_precision_training: Native AMP
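
A hedged sketch of the equivalent `TrainingArguments` is shown below; the output directory and the fp16 flag are assumptions, and the dataset, GPTQ base model, and LoRA setup are omitted:

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs_gptq_training",  # assumed from the run name
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,       # total train batch size: 4
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=10,
    seed=42,
    fp16=True,                           # "Native AMP" mixed precision (assumed fp16)
)
```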
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "astronomer/Llama-3-8B-Instruct-GPTQ-8-Bit", "model-index": [{"name": "outputs_gptq_training", "results": []}]} | WajeehaJ/outputs_gptq_training | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:astronomer/Llama-3-8B-Instruct-GPTQ-8-Bit",
"license:other",
"region:us"
] | null | 2024-04-24T12:42:13+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-astronomer/Llama-3-8B-Instruct-GPTQ-8-Bit #license-other #region-us
|
# outputs_gptq_training
This model is a fine-tuned version of astronomer/Llama-3-8B-Instruct-GPTQ-8-Bit on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# outputs_gptq_training\n\nThis model is a fine-tuned version of astronomer/Llama-3-8B-Instruct-GPTQ-8-Bit on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2\n- training_steps: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.1\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-astronomer/Llama-3-8B-Instruct-GPTQ-8-Bit #license-other #region-us \n",
"# outputs_gptq_training\n\nThis model is a fine-tuned version of astronomer/Llama-3-8B-Instruct-GPTQ-8-Bit on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2\n- training_steps: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.1\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_propaganda_model
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6799
- eval_precision: 0.0639
- eval_recall: 0.0725
- eval_f1: 0.0679
- eval_accuracy: 0.8635
- eval_runtime: 12.6134
- eval_samples_per_second: 66.516
- eval_steps_per_second: 4.202
- epoch: 8.0
- step: 1416
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.30.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
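
As a usage reference, the sketch below runs the fine-tuned checkpoint through the token-classification pipeline; the repo id comes from this card's metadata and the example sentence is illustrative:

```python
# Hedged sketch: tag propaganda-technique spans with the fine-tuned model.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="anismahmahi/my_awesome_propaganda_model",
    aggregation_strategy="simple",  # merge sub-word pieces into whole spans
)
print(tagger("They want you to believe there is no other choice."))
```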
| {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "my_awesome_propaganda_model", "results": []}]} | anismahmahi/my_awesome_propaganda_model | null | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:43:52+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #deberta-v2 #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# my_awesome_propaganda_model
This model is a fine-tuned version of microsoft/deberta-v3-base on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6799
- eval_precision: 0.0639
- eval_recall: 0.0725
- eval_f1: 0.0679
- eval_accuracy: 0.8635
- eval_runtime: 12.6134
- eval_samples_per_second: 66.516
- eval_steps_per_second: 4.202
- epoch: 8.0
- step: 1416
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.30.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
| [
"# my_awesome_propaganda_model\n\nThis model is a fine-tuned version of microsoft/deberta-v3-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.6799\n- eval_precision: 0.0639\n- eval_recall: 0.0725\n- eval_f1: 0.0679\n- eval_accuracy: 0.8635\n- eval_runtime: 12.6134\n- eval_samples_per_second: 66.516\n- eval_steps_per_second: 4.202\n- epoch: 8.0\n- step: 1416",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Framework versions\n\n- Transformers 4.30.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #deberta-v2 #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# my_awesome_propaganda_model\n\nThis model is a fine-tuned version of microsoft/deberta-v3-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.6799\n- eval_precision: 0.0639\n- eval_recall: 0.0725\n- eval_f1: 0.0679\n- eval_accuracy: 0.8635\n- eval_runtime: 12.6134\n- eval_samples_per_second: 66.516\n- eval_steps_per_second: 4.202\n- epoch: 8.0\n- step: 1416",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Framework versions\n\n- Transformers 4.30.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.13.3"
] |
text-to-image | diffusers | # matii-marronii
<Gallery />
## Model description
By Denche354
## Trigger words
You should use `DEN_matii_marronii` to trigger the image generation.
## Download model
Weights for this model are available in PyTorch format.
[Download](/MarkBW/matii-marronii/tree/main) them in the Files & versions tab.
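
For convenience, the sketch below shows one plausible way to apply these weights with diffusers; the base model comes from this card's metadata, the prompt is illustrative, and it assumes the repo's LoRA file is in a layout `load_lora_weights` can resolve:

```python
# Hedged sketch: load the LoRA on top of Stable Diffusion 1.5 and use the trigger word.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("MarkBW/matii-marronii")

image = pipe("DEN_matii_marronii, portrait photo").images[0]
image.save("matii.png")
```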
| {"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "UNICODE\u0000\u0000D\u0000E\u0000N\u0000_\u0000m\u0000a\u0000t\u0000i\u0000i\u0000_\u0000m\u0000a\u0000r\u0000r\u0000o\u0000n\u0000i\u0000i\u0000,\u0000", "output": {"url": "images/00018-1424157527.jpeg"}}], "base_model": "runwayml/stable-diffusion-v1-5", "instance_prompt": "DEN_matii_marronii"} | MarkBW/matii-marronii | null | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"region:us"
] | null | 2024-04-24T12:44:31+00:00 | [] | [] | TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-runwayml/stable-diffusion-v1-5 #region-us
| # matii-marronii
<Gallery />
## Model description
By Denche354
## Trigger words
You should use 'DEN_matii_marronii' to trigger the image generation.
## Download model
Weights for this model are available in PyTorch format.
Download them in the Files & versions tab.
| [
"# matii-marronii\n\n<Gallery />",
"## Model description \n\nBy Denche354",
"## Trigger words\n\nYou should use 'DEN_matii_marronii' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in PyTorch format.\n\nDownload them in the Files & versions tab."
] | [
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-runwayml/stable-diffusion-v1-5 #region-us \n",
"# matii-marronii\n\n<Gallery />",
"## Model description \n\nBy Denche354",
"## Trigger words\n\nYou should use 'DEN_matii_marronii' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in PyTorch format.\n\nDownload them in the Files & versions tab."
] |
null | transformers |
## Llama 3 for router module in RAG (a toy example)
While developing complex RAG applications, I found a common need for router functionality to map user queries to different system workflows (and APIs). The router acts as a dispatcher that can enhance responsiveness and accuracy by choosing the best workflow or API based on the query context. This implies that we need to produce structured output from unstructured input text.
To this end, I undertook a simple exercise to fine-tune the new Llama 3 model to process text input and generate JSON-like output (here is the [colab](https://colab.research.google.com/drive/1Vj0LOjU_5N9VWLpY-AG91dgdGD88Vjwm?usp=sharing)). My hope was that we could avoid some external dependencies for this part of the system by seamlessly integrating various models to reinforce complex applications in production settings. I believed that building a robust critical infrastructure for the semantic modules required choosing the right LLM for a given task.
For training, we used structured data from [azizshaw](https://huggingface.co/datasets/azizshaw/text_to_json). The dataset contained 485 rows and included 'input', 'output', and 'instruction' columns.
For a quick evaluation, we used another dataset for text-to-JSON, the **Diverse Restricted JSON Data Extraction**, curated by the paraloq analytics team ([here](https://huggingface.co/datasets/paraloq/json_data_extraction)).
Run the model for inference:
```python
from unsloth import FastLanguageModel

# `model`, `tokenizer`, and the Alpaca-style `alpaca_prompt` template
# (instruction / input / response slots) are created in earlier cells of the colab.
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
alpaca_prompt.format(
"""
Convert this text into a JSON object. Create field names that meaningfully represent the data being reported.
It is extremely important that you construct a well-formed object.
""", # instruction
"**Medical Document** **Patient Information** * Patient ID: PT123456 * Name: Jane Doe * Date of Birth: 1980-01-01 * Gender: Female * Medical Conditions: * Asthma * Hypertension **Prescription Information** * Prescription ID: RX123456 * Date Prescribed: 2023-03-08 * Date Expires: 2023-09-07 * Status: Active **Medication Information** * Medication ID: MD123456 * Name: Albuterol * Dosage: 200 mcg * Units: mcg * Instructions: Inhale 2 puffs every 4-6 hours as needed for shortness of breath. * Refills: 3 **Pharmacy Information** * Pharmacy ID: PH123456 * Name: CVS Pharmacy * Address: 123 Main Street, Anytown, CA 12345 * Phone: (123) 456-7890 **Additional Information** * The patient has been using Albuterol for the past 5 years to manage her asthma. * The patient has been advised to use a spacer device with the Albuterol inhaler to improve the delivery of the medication to the lungs. * The patient should avoid using Albuterol more than 4 times per day. * The patient should contact her doctor if her asthma symptoms worsen or if she experiences any side effects from the medication. **Instructions for the Patient** * Take Albuterol exactly as prescribed by your doctor. * Do not take more than the prescribed dosage. * Use a spacer device with the Albuterol inhaler. * Avoid using Albuterol more than 4 times per day. * Contact your doctor if your asthma symptoms worsen or if you experience any side effects from the medication. **Signature** [Doctor's Name] [Date]", # input
"", # output - leave this blank for generation!
)
], return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 1000, use_cache = True)
tokenizer.batch_decode(outputs)
```
```python
import json
import pprint
text = "{'feature1': {'detail': {'text': 'Medical Document', 'pid': 'PT123456', 'name': 'Jane Doe', 'dob': '1980-01-01', 'gender': 'Female', 'conditions': ['Asthma', 'Hypertension']}, 'detail2': {'text': 'Prescription Information', 'pid': 'RX123456', 'date': '2023-03-08', 'expires': '2023-09-07','status': 'Active'}, 'detail3': {'text': 'Medication Information', 'id': 'MD123456', 'name': 'Albuterol', 'dosage': '200 mcg', 'units':'mcg', 'instructions': 'Inhale 2 puffs every 4-6 hours as needed for shortness of breath.','refills': '3'}, 'detail4': {'text': 'Pharmacy Information', 'id': 'PH123456', 'name': 'CVS Pharmacy', 'address': '123 Main Street, Anytown, CA 12345', 'phone': '(123) 456-7890'}}, 'feature2': {'detail': {'text': 'The patient has been using Albuterol for the past 5 years to manage her asthma.', 'pid': '', 'name': '', 'dob': '', 'gender': '', 'conditions': []}, 'detail2': {'text': 'The patient has been advised to use a spacer device with the Albuterol inhaler to improve the delivery of the medication to the lungs.', 'pid': '', 'name': '', 'date': '', 'expires': '','status': ''}, 'detail3': {'text': 'The patient should avoid using Albuterol more than 4 times per day.', 'id': '', 'name': '', 'dosage': '', 'units': '', 'instructions': '','refills': ''}, 'detail4': {'text': 'The patient should contact her doctor if her asthma symptoms worsen or if she experiences any side effects from the medication.', 'pid': '', 'name': '', 'address': '', 'phone': ''}}}"
# Brittle fix-up: the model emits single-quoted, Python-style dicts, so swap the
# quotes before json.loads (this breaks if any value contains an apostrophe).
output = text.replace("'", '"')
data_dict = json.loads(output)
print(len(data_dict))  # 2 top-level features
pprint.pprint(data_dict['feature1'])
```
The result:
```
{'detail': {'conditions': ['Asthma', 'Hypertension'],
'dob': '1980-01-01',
'gender': 'Female',
'name': 'Jane Doe',
'pid': 'PT123456',
'text': 'Medical Document'},
'detail2': {'date': '2023-03-08',
'expires': '2023-09-07',
'pid': 'RX123456',
'status': 'Active',
'text': 'Prescription Information'},
'detail3': {'dosage': '200 mcg',
'id': 'MD123456',
'instructions': 'Inhale 2 puffs every 4-6 hours as needed for '
'shortness of breath.',
'name': 'Albuterol',
'refills': '3',
'text': 'Medication Information',
'units': 'mcg'},
'detail4': {'address': '123 Main Street, Anytown, CA 12345',
'id': 'PH123456',
'name': 'CVS Pharmacy',
'phone': '(123) 456-7890',
'text': 'Pharmacy Information'}}
```
## Results Notes
- Considering that we are working with a toy example (4-bit quantization model, tiny dataset for SFT), the results seem like a good starting point, credit to Llama 3.
- Because the fine-tuning examples contain strings with names enclosed in single quotes, the model learns this notation and generates single-quoted output. The quote replacement used above is far from optimal for securing our workflow and ensuring robust code (see the safer parsing sketch below).
- Another point to note is that the response tends to repeat information.
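
Expanding on the single-quote issue above, here is a hedged sketch of a safer parsing step than the quote replacement; it assumes the raw model output is a valid Python dict literal:

```python
# Sketch (assumption: the model output is a valid Python dict literal).
import ast
import json

raw = "{'feature1': {'pid': 'PT123456', 'name': 'Jane Doe'}}"  # illustrative output
data = ast.literal_eval(raw)       # parses single-quoted dicts safely, no code execution
print(json.dumps(data, indent=2))  # re-serialize as well-formed JSON
```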
## Uploaded model
- **Developed by:** sccastillo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | sccastillo/llama3_router | null | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:48:26+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #gguf #llama #text-generation-inference #unsloth #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
## Llama 3 for router module in RAG (a toy example)
While developing complex RAG applications, I found a common need for router functionality to map user queries to different system workflows (and APIs). The router acts as a dispatcher that can enhance responsiveness and accuracy by choosing the best workflow or API based on the query context. This implies that we need to produce structured output from unstructured input text.
To this end, I undertook a simple exercise to fine-tune the new Llama 3 model to process text input and generate JSON-like output (here is the colab). My hope was that we could avoid some external dependencies for this part of the system by seamlessly integrating various models to reinforce complex applications in production settings. I believed that building a robust critical infrastructure for the semantic modules required choosing the right LLM for a given task.
For training, we used structured data from azizshaw. The dataset contained 485 rows and included 'input', 'output', and 'instruction' columns.
For a quick evaluation, we used another dataset for text-to-JSON, the Diverse Restricted JSON Data Extraction, curated by the paraloq analytics team (here).
Run the model for inference:
The result:
## Results Notes
- Considering that we are working with a toy example (4-bit quantization model, tiny dataset for SFT), the results seem like a good starting point, credit to Llama 3.
- Because the fine-tuning examples contain strings with names enclosed in single quotes, the model learns this notation, resulting in output generated with single quotes. This approach is far from optimal for securing our workflow and ensuring robust code.
- Another point to note is that the response tends to repeat information.
## Uploaded model
- Developed by: sccastillo
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
| [
"## LLama 3 for router module in RAG (a toy example)\n\nWhile developing complex RAG applications, I found a common need for router functionality to map user queries to different system workflows (and APIs). The router acts as a dispatcher that can enhance responsiveness and accuracy by choosing the best workflow or API based on the query context. This implies that we need to produce structured output from unstructured input text. \n\nTo this end, I undertook a simple exercise to fine-tune the new Llama 3 model to process text input and generate JSON-like output (here is the colab). My hope was that we could avoid some external dependencies for this part of the system by seamlessly integrating various models to reinforce complex applications in production settings. I believed that building a robust critical infrastructure for the semantic modules required choosing the right LLM for a given task.\n\nFor training, we used structured data from azizshaw. The dataset contained 485 rows and included 'input', 'output', and 'instruction' columns. \n\nFor a quick evaluation, we used another dataset for text-to-JSON, the Diverse Restricted JSON Data Extraction, curated by the paraloq analytics team (here).\n\nRun the model for inference:\n\n\n\n\nThe result:",
"## Results Notes\n\n- Considering that we are working with a toy example (4-byte quantization model, tiny dataset for SFT), the results seem like a good starting point, credit for Llama 3.\n- As we fine-tune the model with examples of strings using single quotes enclosed names, the model learns to use this notation, resulting in output generated with single quotes. This approach is far from optimal for securing our workflow and ensuring robust code.\n- Another point to note is that the response tends to repeat information.",
"## Uploaded model\n- Developed by: sccastillo\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library."
] | [
"TAGS\n#transformers #safetensors #gguf #llama #text-generation-inference #unsloth #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"## LLama 3 for router module in RAG (a toy example)\n\nWhile developing complex RAG applications, I found a common need for router functionality to map user queries to different system workflows (and APIs). The router acts as a dispatcher that can enhance responsiveness and accuracy by choosing the best workflow or API based on the query context. This implies that we need to produce structured output from unstructured input text. \n\nTo this end, I undertook a simple exercise to fine-tune the new Llama 3 model to process text input and generate JSON-like output (here is the colab). My hope was that we could avoid some external dependencies for this part of the system by seamlessly integrating various models to reinforce complex applications in production settings. I believed that building a robust critical infrastructure for the semantic modules required choosing the right LLM for a given task.\n\nFor training, we used structured data from azizshaw. The dataset contained 485 rows and included 'input', 'output', and 'instruction' columns. \n\nFor a quick evaluation, we used another dataset for text-to-JSON, the Diverse Restricted JSON Data Extraction, curated by the paraloq analytics team (here).\n\nRun the model for inference:\n\n\n\n\nThe result:",
"## Results Notes\n\n- Considering that we are working with a toy example (4-byte quantization model, tiny dataset for SFT), the results seem like a good starting point, credit for Llama 3.\n- As we fine-tune the model with examples of strings using single quotes enclosed names, the model learns to use this notation, resulting in output generated with single quotes. This approach is far from optimal for securing our workflow and ensuring robust code.\n- Another point to note is that the response tends to repeat information.",
"## Uploaded model\n- Developed by: sccastillo\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
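Until the authors fill this section in, a minimal sketch — the repo id is taken from this card's metadata, and a standard tokenizer checkpoint layout is assumed:

```python
# Hypothetical quick start: load the tokenizer and inspect its output.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("samzirbo/mT5.tokenizer.en-es.24K.30M")
print(tok.tokenize("Hello world / Hola mundo"))
```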
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | samzirbo/mT5.tokenizer.en-es.24K.30M | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:48:30+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# hus960/Unholy-Aura-Llama-3-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`ChaoticNeutrals/Unholy-Aura-Llama-3-8B`](https://huggingface.co/ChaoticNeutrals/Unholy-Aura-Llama-3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ChaoticNeutrals/Unholy-Aura-Llama-3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo hus960/Unholy-Aura-Llama-3-8B-Q4_K_M-GGUF --model unholy-aura-llama-3-8b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo hus960/Unholy-Aura-Llama-3-8B-Q4_K_M-GGUF --model unholy-aura-llama-3-8b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m unholy-aura-llama-3-8b.Q4_K_M.gguf -n 128
```
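The same file can also be loaded from Python. A minimal sketch, assuming the `llama-cpp-python` bindings are installed (`pip install llama-cpp-python`):

```python
# Sketch: run the quantized GGUF checkpoint through llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="unholy-aura-llama-3-8b.Q4_K_M.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```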
| {"library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["Undi95/Llama-3-Unholy-8B", "ResplendentAI/Aura_L3_8B"]} | hus960/Unholy-Aura-Llama-3-8B-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Undi95/Llama-3-Unholy-8B",
"base_model:ResplendentAI/Aura_L3_8B",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:48:52+00:00 | [] | [] | TAGS
#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-Undi95/Llama-3-Unholy-8B #base_model-ResplendentAI/Aura_L3_8B #endpoints_compatible #region-us
|
# hus960/Unholy-Aura-Llama-3-8B-Q4_K_M-GGUF
This model was converted to GGUF format from 'ChaoticNeutrals/Unholy-Aura-Llama-3-8B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# hus960/Unholy-Aura-Llama-3-8B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'ChaoticNeutrals/Unholy-Aura-Llama-3-8B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-Undi95/Llama-3-Unholy-8B #base_model-ResplendentAI/Aura_L3_8B #endpoints_compatible #region-us \n",
"# hus960/Unholy-Aura-Llama-3-8B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'ChaoticNeutrals/Unholy-Aura-Llama-3-8B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
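As a placeholder, a minimal sketch assuming a standard causal-LM checkpoint layout; the repo id comes from this card's metadata, and if the repo holds only PEFT adapter weights they would need to be loaded onto the base model instead:

```python
# Hypothetical quick start; the prompt is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Likich/gemma-finetune-qualcoding"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Assign a qualitative code to this excerpt:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```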
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Likich/gemma-finetune-qualcoding | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:49:11+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
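Until the authors provide their own snippet, a minimal sketch assuming the checkpoint follows its base model (meta-llama/Llama-2-7b-chat-hf, per this card's metadata); the question is illustrative:

```python
# Hypothetical quick start via the text-generation pipeline.
from transformers import pipeline

chat = pipeline("text-generation", model="sohamslc5/IIITA-Chatbot")
print(chat("What programmes does IIIT Allahabad offer?",
           max_new_tokens=64)[0]["generated_text"])
```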
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"language": ["en"], "library_name": "transformers", "datasets": ["sohamslc5/curr1"], "metrics": ["accuracy"], "pipeline_tag": "text-generation", "base_model": "meta-llama/Llama-2-7b-chat-hf"} | sohamslc5/IIITA-Chatbot | null | [
"transformers",
"safetensors",
"text-generation",
"en",
"dataset:sohamslc5/curr1",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:49:21+00:00 | [
"1910.09700"
] | [
"en"
] | TAGS
#transformers #safetensors #text-generation #en #dataset-sohamslc5/curr1 #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #endpoints_compatible #region-us
| # Model Card for Model ID
This modelcard aims to be a base template for new models. It has been generated using this raw template.
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #text-generation #en #dataset-sohamslc5/curr1 #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #endpoints_compatible #region-us \n",
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | null |
GGUFs for https://huggingface.co/microsoft/Phi-3-mini-128k-instruct
iMatrix generated with Kalomaze's groups_merged.txt | {"language": ["en"], "license": "mit", "tags": ["nlp", "code"], "license_link": "https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation"} | MarsupialAI/Phi-3-mini-128k-instruct_iMatrix_GGUF | null | [
"gguf",
"nlp",
"code",
"text-generation",
"en",
"license:mit",
"region:us"
] | null | 2024-04-24T12:50:03+00:00 | [] | [
"en"
] | TAGS
#gguf #nlp #code #text-generation #en #license-mit #region-us
|
GGUFs for URL
iMatrix generated with Kalomaze's groups_merged.txt | [] | [
"TAGS\n#gguf #nlp #code #text-generation #en #license-mit #region-us \n"
] |
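For context, importance-matrix generation of the kind described above is typically done with llama.cpp's `llama-imatrix` tool. A hedged sketch — the input filename is hypothetical, while the calibration file is the one the card names:

```bash
# Sketch: compute an importance matrix from Kalomaze's calibration text,
# for later use when quantizing the model. Filenames are illustrative.
llama-imatrix -m Phi-3-mini-128k-instruct.f16.gguf \
    -f groups_merged.txt -o phi3.imatrix
```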
null | fastai |
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
| {"tags": ["fastai"]} | osvitore/delfinesoballenas | null | [
"fastai",
"region:us"
] | null | 2024-04-24T12:50:24+00:00 | [] | [] | TAGS
#fastai #region-us
|
# Amazing!
Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the documentation here)!
2. Create a demo in Gradio or Streamlit using Spaces (documentation here).
3. Join the fastai community on the Fastai Discord!
Greetings fellow fastlearner ! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
| [
"# Amazing!\n\n Congratulations on hosting your fastai model on the Hugging Face Hub!",
"# Some next steps\n1. Fill out this model card with more information (see the template below and the documentation here)!\n\n2. Create a demo in Gradio or Streamlit using Spaces (documentation here).\n\n3. Join the fastai community on the Fastai Discord!\n\nGreetings fellow fastlearner ! Don't forget to delete this content from your model card.\n\n\n---",
"# Model card",
"## Model description\nMore information needed",
"## Intended uses & limitations\nMore information needed",
"## Training and evaluation data\nMore information needed"
] | [
"TAGS\n#fastai #region-us \n",
"# Amazing!\n\n Congratulations on hosting your fastai model on the Hugging Face Hub!",
"# Some next steps\n1. Fill out this model card with more information (see the template below and the documentation here)!\n\n2. Create a demo in Gradio or Streamlit using Spaces (documentation here).\n\n3. Join the fastai community on the Fastai Discord!\n\nGreetings fellow fastlearner ! Don't forget to delete this content from your model card.\n\n\n---",
"# Model card",
"## Model description\nMore information needed",
"## Intended uses & limitations\nMore information needed",
"## Training and evaluation data\nMore information needed"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-2
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
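For illustration, a minimal `TrainingArguments` sketch mirroring the hyperparameters listed above; the output directory is hypothetical, and the Adam betas/epsilon shown in the list are the library defaults:

```python
# Sketch of the training configuration implied by the hyperparameter list.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-2",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=2,
    lr_scheduler_type="linear",
    num_train_epochs=1,  # Adam betas=(0.9, 0.999), eps=1e-8 are the defaults
)
```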
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-2", "results": []}]} | AlignmentResearch/robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-2 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T12:51:12+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-2
This model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-2\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-2\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-3
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
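A configuration sketch for these hyperparameters appears under the seed-2 card above; it applies here unchanged apart from `seed=3`.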
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-3", "results": []}]} | AlignmentResearch/robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-3 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T12:51:13+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-3
This model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
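In the meantime, a minimal sketch assuming a standard text-classification head (the card's tags list `bert` and `text-classification`); the input sentence is illustrative:

```python
# Hypothetical quick start via the text-classification pipeline.
from transformers import pipeline

clf = pipeline("text-classification", model="MoGP/f_x")
print(clf("An example sentence to classify."))
```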
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | MoGP/f_x | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:54:46+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
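No usage code was provided, so here is a minimal sketch (an assumption based on this card's `stablelm` / `conversational` tags and the repo id `heyllm234/sc75` from its metadata — not author-provided code):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heyllm234/sc75"  # repo id taken from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
# depending on the checkpoint vintage, trust_remote_code=True may be required
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```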
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | heyllm234/sc75 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T12:56:53+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null | # pfnet-nekomata-14b-pfn-qfin-gguf
This is a GGUF-format conversion of [nekomata-14b-pfn-qfin, published by pfnet](https://huggingface.co/pfnet/nekomata-14b-pfn-qfin).
The imatrix data was created using TFMC/imatrix-dataset-for-japanese-llm.
## License
This model is released under the tongyi-qianwen license.
[Please review the license before use](https://huggingface.co/pfnet/nekomata-14b-pfn-qfin/blob/main/LICENSE)
## Other models
[mmnga/pfnet-nekomata-14b-pfn-qfin-gguf](https://huggingface.co/mmnga/pfnet-nekomata-14b-pfn-qfin-gguf)
[mmnga/pfnet-nekomata-14b-pfn-qfin-inst-merge-gguf](https://huggingface.co/mmnga/pfnet-nekomata-14b-pfn-qfin-inst-merge-gguf)
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'pfnet-nekomata-14b-pfn-qfin-q4_0.gguf' -n 128 --temp 0.5 -p '### 指示:次の日本語を英語に翻訳してください。\n\n### 入力: 大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。 \n\n### 応答:'
``` | {"language": ["en", "ja"], "license": "other", "tags": ["qwen"], "datasets": ["TFMC/imatrix-dataset-for-japanese-llm"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/pfnet/nekomata-14b-pfn-qfin/blob/main/LICENSE"} | mmnga/pfnet-nekomata-14b-pfn-qfin-gguf | null | [
"gguf",
"qwen",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:other",
"region:us"
] | null | 2024-04-24T12:58:09+00:00 | [] | [
"en",
"ja"
] | TAGS
#gguf #qwen #en #ja #dataset-TFMC/imatrix-dataset-for-japanese-llm #license-other #region-us
| # pfnet-nekomata-14b-pfn-qfin-gguf
This is a GGUF-format conversion of nekomata-14b-pfn-qfin, published by pfnet.
The imatrix data was created using TFMC/imatrix-dataset-for-japanese-llm.
## License
This model is released under the tongyi-qianwen license.
Please review the license before use.
## Other models
mmnga/pfnet-nekomata-14b-pfn-qfin-gguf
mmnga/pfnet-nekomata-14b-pfn-qfin-inst-merge-gguf
## Usage
| [
"# pfnet-nekomata-14b-pfn-qfin-gguf \npfnetさんが公開しているnekomata-14b-pfn-qfinのggufフォーマット変換版です。\n\nimatrixのデータはTFMC/imatrix-dataset-for-japanese-llmを使用して作成しました。",
"## ライセンス\ntongyi-qianwenライセンスになります。 \nご使用前にライセンスをご確認ください",
"## 他のモデル\nmmnga/pfnet-nekomata-14b-pfn-qfin-gguf \nmmnga/pfnet-nekomata-14b-pfn-qfin-inst-merge-gguf",
"## Usage"
] | [
"TAGS\n#gguf #qwen #en #ja #dataset-TFMC/imatrix-dataset-for-japanese-llm #license-other #region-us \n",
"# pfnet-nekomata-14b-pfn-qfin-gguf \npfnetさんが公開しているnekomata-14b-pfn-qfinのggufフォーマット変換版です。\n\nimatrixのデータはTFMC/imatrix-dataset-for-japanese-llmを使用して作成しました。",
"## ライセンス\ntongyi-qianwenライセンスになります。 \nご使用前にライセンスをご確認ください",
"## 他のモデル\nmmnga/pfnet-nekomata-14b-pfn-qfin-gguf \nmmnga/pfnet-nekomata-14b-pfn-qfin-inst-merge-gguf",
"## Usage"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
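No usage code was provided, so here is a minimal sketch (an assumption based on this card's `bert` / `text-classification` tags and the repo id `MoGP/g_x` from its metadata — not author-provided code):

```python
from transformers import pipeline

# load the checkpoint as a standard text-classification pipeline
classifier = pipeline("text-classification", model="MoGP/g_x")
print(classifier("Example input sentence."))
```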
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | MoGP/g_x | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:00:01+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | diffusers | **Github repo**: https://github.com/magic-research/piecewise-rectified-flow <br>
**PeRFlow accelerated SDXL-DreamShaper**: https://huggingface.co/Lykon/dreamshaper-xl-1-0
**Demo:**
```python
import os
import random

import numpy as np
import torch, torchvision
from diffusers import StableDiffusionXLPipeline

def setup_seed(seed):
    # make sampling reproducible across python / numpy / torch
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

pipe = StableDiffusionXLPipeline.from_pretrained("hansyan/perflow-sdxl-dreamshaper", torch_dtype=torch.float16, use_safetensors=True, variant="v0-fix")

# PeRFlowScheduler ships with the project repo: https://github.com/magic-research/piecewise-rectified-flow
from src.scheduler_perflow import PeRFlowScheduler
pipe.scheduler = PeRFlowScheduler.from_config(pipe.scheduler.config, prediction_type="ddim_eps", num_time_windows=4)
pipe.to("cuda", torch.float16)

prompts_list = [
    ["photorealistic, uhd, high resolution, high quality, highly detailed; RAW photo, a handsome man, wearing a black coat, outside, closeup face",
     "distorted, blur, low-quality, haze, out of focus",],
    ["photorealistic, uhd, high resolution, high quality, highly detailed; masterpiece, A closeup face photo of girl, wearing a rain coat, in the street, heavy rain, bokeh,",
     "distorted, blur, low-quality, haze, out of focus",],
    ["photorealistic, uhd, high resolution, high quality, highly detailed; RAW photo, a red luxury car, studio light",
     "distorted, blur, low-quality, haze, out of focus",],
    ["photorealistic, uhd, high resolution, high quality, highly detailed; masterpiece, A beautiful cat bask in the sun",
     "distorted, blur, low-quality, haze, out of focus",],
]

num_inference_steps = 6  # suggest steps >= num_win = 4
cfg_scale_list = [2.0]   # suggested values: [1.5, 2.0, 2.5]
num_img = 2
seed = 42
os.makedirs("demo", exist_ok=True)

for cfg_scale in cfg_scale_list:
    for i, prompts in enumerate(prompts_list):
        setup_seed(seed)
        prompt, neg_prompt = prompts[0], prompts[1]
        samples = pipe(
            prompt=[prompt] * num_img,
            negative_prompt=[neg_prompt] * num_img,
            height=1024,
            width=1024,
            num_inference_steps=num_inference_steps,
            guidance_scale=cfg_scale,
            output_type='pt',
        ).images
        # encode the cfg scale in the file name, e.g. 2.0 -> "cfg2-0"
        cfg_int = int(cfg_scale); cfg_float = int(cfg_scale * 10 - cfg_int * 10)
        save_name = f'step_{num_inference_steps}_txt{i+1}_cfg{cfg_int}-{cfg_float}.png'
        torchvision.utils.save_image(torchvision.utils.make_grid(samples, nrow=num_img), os.path.join("demo", save_name))
``` | {"license": "cc-by-nc-4.0"} | hansyan/perflow-sdxl-dreamshaper | null | [
"diffusers",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null | 2024-04-24T13:00:48+00:00 | [] | [] | TAGS
#diffusers #license-cc-by-nc-4.0 #endpoints_compatible #has_space #diffusers-StableDiffusionXLPipeline #region-us
| Github repo: URL <br>
PeRFlow accelerated SDXL-DreamShaper: URL
Demo:
| [] | [
"TAGS\n#diffusers #license-cc-by-nc-4.0 #endpoints_compatible #has_space #diffusers-StableDiffusionXLPipeline #region-us \n"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** akbargherbal
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | akbargherbal/think_tanks_v02_4bit | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-04-24T13:02:31+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Uploaded model
- Developed by: akbargherbal
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: akbargherbal\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Uploaded model\n\n- Developed by: akbargherbal\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/nbeerbower/llama-3-slerp-kraut-dragon-8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
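As a concrete sketch (assuming `huggingface_hub` and `llama-cpp-python` are installed — neither is prescribed by this card), one way to fetch and load a single-file quant from the table below:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is the "fast, recommended" pick in the table below
path = hf_hub_download(
    repo_id="mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF",
    filename="llama-3-slerp-kraut-dragon-8B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```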
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF/resolve/main/llama-3-slerp-kraut-dragon-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF/resolve/main/llama-3-slerp-kraut-dragon-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF/resolve/main/llama-3-slerp-kraut-dragon-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF/resolve/main/llama-3-slerp-kraut-dragon-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF/resolve/main/llama-3-slerp-kraut-dragon-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF/resolve/main/llama-3-slerp-kraut-dragon-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF/resolve/main/llama-3-slerp-kraut-dragon-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF/resolve/main/llama-3-slerp-kraut-dragon-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF/resolve/main/llama-3-slerp-kraut-dragon-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF/resolve/main/llama-3-slerp-kraut-dragon-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF/resolve/main/llama-3-slerp-kraut-dragon-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF/resolve/main/llama-3-slerp-kraut-dragon-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF/resolve/main/llama-3-slerp-kraut-dragon-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF/resolve/main/llama-3-slerp-kraut-dragon-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF/resolve/main/llama-3-slerp-kraut-dragon-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "nbeerbower/llama-3-slerp-kraut-dragon-8B", "license_name": "llama3", "quantized_by": "mradermacher"} | mradermacher/llama-3-slerp-kraut-dragon-8B-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:nbeerbower/llama-3-slerp-kraut-dragon-8B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:02:36+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #mergekit #merge #en #base_model-nbeerbower/llama-3-slerp-kraut-dragon-8B #license-other #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #mergekit #merge #en #base_model-nbeerbower/llama-3-slerp-kraut-dragon-8B #license-other #endpoints_compatible #region-us \n"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# experiments
This model is a fine-tuned version of [vilsonrodrigues/falcon-7b-instruct-sharded](https://huggingface.co/vilsonrodrigues/falcon-7b-instruct-sharded) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
- mixed_precision_training: Native AMP
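For reference, a sketch of these settings expressed as 🤗 `TrainingArguments` (an assumption — the actual training script is not included in this card):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="experiments",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size of 4
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```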
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "vilsonrodrigues/falcon-7b-instruct-sharded", "model-index": [{"name": "experiments", "results": []}]} | Swathi0810/experiments | null | [
"tensorboard",
"generated_from_trainer",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T13:03:26+00:00 | [] | [] | TAGS
#tensorboard #generated_from_trainer #base_model-vilsonrodrigues/falcon-7b-instruct-sharded #license-apache-2.0 #region-us
|
# experiments
This model is a fine-tuned version of vilsonrodrigues/falcon-7b-instruct-sharded on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# experiments\n\nThis model is a fine-tuned version of vilsonrodrigues/falcon-7b-instruct-sharded on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.05\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.41.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#tensorboard #generated_from_trainer #base_model-vilsonrodrigues/falcon-7b-instruct-sharded #license-apache-2.0 #region-us \n",
"# experiments\n\nThis model is a fine-tuned version of vilsonrodrigues/falcon-7b-instruct-sharded on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.05\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.41.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | null |
# NeuralsynthesisStrangemerges_32-7B
NeuralsynthesisStrangemerges_32-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: Kukedlc/NeuralSynthesis-7b-v0.4-slerp
- model: Gille/StrangeMerges_32-7B-slerp
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/NeuralsynthesisStrangemerges_32-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/NeuralsynthesisStrangemerges_32-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T13:05:16+00:00 | [] | [] | TAGS
#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us
|
# NeuralsynthesisStrangemerges_32-7B
NeuralsynthesisStrangemerges_32-7B is an automated merge created by Maxime Labonne using the following configuration.
## Configuration
## Usage
| [
"# NeuralsynthesisStrangemerges_32-7B\n\nNeuralsynthesisStrangemerges_32-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] | [
"TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n",
"# NeuralsynthesisStrangemerges_32-7B\n\nNeuralsynthesisStrangemerges_32-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# race color - 0,
# socioeconomic - 1,
# gender - 2,
# disability - 3,
# nationality - 4,
# sexual-orientation - 5,
# physical-appearance - 6,
# religion - 7,
# age - 8.
# Profession - 9.
# bias_identificaiton45
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.39.3
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
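A minimal inference sketch (assumed usage — the index-to-label mapping is taken from the headings at the top of this card and should be verified against the model's config):

```python
from transformers import pipeline

id2label = {
    0: "race color", 1: "socioeconomic", 2: "gender", 3: "disability",
    4: "nationality", 5: "sexual-orientation", 6: "physical-appearance",
    7: "religion", 8: "age", 9: "profession",
}
classifier = pipeline("text-classification", model="PriyaPatel/bias_identificaiton45", framework="tf")
pred = classifier("He is too old to learn a new language.")[0]
# default labels come back as e.g. "LABEL_3"; map the index to a bias type
print(pred, "->", id2label[int(pred["label"].split("_")[-1])])
```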
| {"tags": ["generated_from_keras_callback"], "model-index": [{"name": "bias_identificaiton45", "results": []}]} | PriyaPatel/bias_identificaiton45 | null | [
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:05:27+00:00 | [] | [] | TAGS
#transformers #tf #roberta #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us
|
# race color - 0,
# socioeconomic - 1,
# gender - 2,
# disability - 3,
# nationality - 4,
# sexual-orientation - 5,
# physical-appearance - 6,
# religion - 7,
# age - 8.
# Proffesion - 9.
# bias_identificaiton45
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.39.3
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# race color - 0,",
"# socioeconomic - 1,",
"# gender - 2,",
"# disability - 3,",
"# nationality - 4,",
"# sexualorientation - 5,",
"# physical-appearance - 6,",
"# religion - 7,",
"# age - 8.",
"# Proffesion - 9.",
"# bias_identificaiton45\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- TensorFlow 2.15.0\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tf #roberta #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n",
"# race color - 0,",
"# socioeconomic - 1,",
"# gender - 2,",
"# disability - 3,",
"# nationality - 4,",
"# sexualorientation - 5,",
"# physical-appearance - 6,",
"# religion - 7,",
"# age - 8.",
"# Proffesion - 9.",
"# bias_identificaiton45\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- TensorFlow 2.15.0\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
image-to-3d | null |
Invisible Stitch: Generating Smooth 3D Scenes with Depth Inpainting
This repository contains the checkpoint for our depth completion network that also powers the demo at https://huggingface.co/spaces/paulengstler/invisible-stitch/
Please see https://github.com/paulengstler/invisible-stitch for the code release.
| {"tags": ["image-to-3d"]} | paulengstler/invisible-stitch | null | [
"image-to-3d",
"region:us",
"has_space"
] | null | 2024-04-24T13:09:03+00:00 | [] | [] | TAGS
#image-to-3d #region-us #has_space
|
Invisible Stitch: Generating Smooth 3D Scenes with Depth Inpainting
This repository contains the checkpoint for our depth completion network that also powers the demo at URL
Please see URL for the code release.
| [] | [
"TAGS\n#image-to-3d #region-us #has_space \n"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Azure99/blossom-v3_1-yi-34b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
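As a concrete sketch (assuming `huggingface_hub` is installed — this card does not prescribe a download tool), one way to fetch a single-file quant from the table below:

```python
from huggingface_hub import hf_hub_download

# the "fast, recommended" imatrix quant from the table below
path = hf_hub_download(
    repo_id="mradermacher/blossom-v3_1-yi-34b-i1-GGUF",
    filename="blossom-v3_1-yi-34b.i1-Q4_K_M.gguf",
)
print(path)  # pass this file to llama.cpp / llama-cpp-python
```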
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["Azure99/blossom-chat-v1", "Azure99/blossom-math-v2", "Azure99/blossom-wizard-v1", "Azure99/blossom-orca-v1"], "base_model": "Azure99/blossom-v3_1-yi-34b", "quantized_by": "mradermacher"} | mradermacher/blossom-v3_1-yi-34b-i1-GGUF | null | [
"transformers",
"gguf",
"en",
"dataset:Azure99/blossom-chat-v1",
"dataset:Azure99/blossom-math-v2",
"dataset:Azure99/blossom-wizard-v1",
"dataset:Azure99/blossom-orca-v1",
"base_model:Azure99/blossom-v3_1-yi-34b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:09:06+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #dataset-Azure99/blossom-chat-v1 #dataset-Azure99/blossom-math-v2 #dataset-Azure99/blossom-wizard-v1 #dataset-Azure99/blossom-orca-v1 #base_model-Azure99/blossom-v3_1-yi-34b #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #dataset-Azure99/blossom-chat-v1 #dataset-Azure99/blossom-math-v2 #dataset-Azure99/blossom-wizard-v1 #dataset-Azure99/blossom-orca-v1 #base_model-Azure99/blossom-v3_1-yi-34b #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | domenicrosati/lens-loss-minimality-l2_lr_2e-5_model_meta-llama_Llama-2-7b-chat-hf_batch_4_epoch_1_num_layers_6 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T13:10:37+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
*There is currently an issue with the **model generating random reserved special tokens (like "<|reserved_special_token_49|>") at the end**. Please use with `skip_special_tokens=true`. We will update once we have found the reason for this behaviour. If you find a solution, please let us know!*
# Llama 3 DiscoLM German 8b v0.1 Experimental
<p align="center"><img src="disco_llama.webp" width="400"></p>
# Introduction
**Llama 3 DiscoLM German 8b v0.1 Experimental** is an experimental Llama 3 based version of [DiscoLM German](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1).
This is an experimental release and not intended for production use. The model is still in development and will be updated with new features and improvements in the future.
Please find an online Demo [here](https://364b61f772fa7baacb.gradio.live/) (we may take this offline for updates).
# Prompt Format
DiscoLM German uses ChatML as the prompt format, which enables OpenAI endpoint compatibility and is supported by most inference libraries and frontends.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
```
<|im_start|>system
Du bist ein hilfreicher Assistent.<|im_end|>
<|im_start|>user
Wer bist du?<|im_end|>
<|im_start|>assistant
Ich bin ein Sprachmodell namens DiscoLM German und ich wurde von DiscoResearch trainiert.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "Du bist ein hilfreicher Assistent."},
{"role": "user", "content": "Wer bist du?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
# Example Code for Inference
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "Du bist ein hilfreicher Assistent."},
{"role": "user", "content": "Wer bist du?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
# Limitations & Biases
This model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution.
# License
This model is distributed under the META LLAMA 3 COMMUNITY LICENSE, see [LICENSE](LICENSE) for more information.
# Acknowledgements
Built with Meta Llama 3.
DiscoLM German is a [DiscoResearch](https://huggingface.co/DiscoResearch) project, a collective effort by [JP Harries](https://huggingface.co/jphme), [Björn Plüster](https://huggingface.co/bjoernp) and [Daniel Auras](https://huggingface.co/rasdani).
Development of Llama 3 DiscoLM German 8b was sponsored by [ellamind](https://ellamind.com).
Compute was sponsored generously by [sysGen GmbH](https://www.sysgen.de/).
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# About DiscoResearch
DiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our [Discord](https://discord.gg/ttNdas89f3), share your opinions and ideas, and advance open LLM research with us!
# Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place.
| {"library_name": "transformers", "tags": ["exl2"]} | mayflowergmbh/Llama3_DiscoLM_German_8b_v0.1_experimental-EXL2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"exl2",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"5-bit",
"region:us"
] | null | 2024-04-24T13:10:46+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #exl2 #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us
|
*There is currently an issue with the model generating random reserved special tokens (like "<|reserved_special_token_49|>") at the end. Please use with 'skip_special_tokens=true'. We will update once we have found the reason for this behaviour. If you find a solution, please let us know!*
# Llama 3 DiscoLM German 8b v0.1 Experimental
<p align="center"><img src="disco_llama.webp" width="400"></p>
# Introduction
Llama 3 DiscoLM German 8b v0.1 Experimental is an experimental Llama 3 based version of DiscoLM German.
This is an experimental release and not intended for production use. The model is still in development and will be updated with new features and improvements in the future.
Please find an online Demo here (we may take this offline for updates).
# Prompt Format
DiscoLM German uses ChatML as the prompt format, which enables OpenAI endpoint compatibility and is supported by most inference libraries and frontends.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This prompt is available as a chat template, which means you can format messages using the
'tokenizer.apply_chat_template()' method:
When tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\n' to your prompt, to ensure
that the model continues with an assistant response.
# Example Code for Inference
# Limitations & Biases
This model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution.
# License
This model is distributed under the META LLAMA 3 COMMUNITY LICENSE, see LICENSE for more information.
# Acknowledgements
Built with Meta Llama 3.
DiscoLM German is a DiscoResearch project, a collective effort by JP Harries, Björn Plüster and Daniel Auras.
Development of Llama 3 DiscoLM German 8b was sponsored by ellamind.
Compute was sponsored generously by sysGen GmbH.
<img src="URL" alt="Built with Axolotl" width="200" height="32"/>
# About DiscoResearch
DiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our Discord, share your opinions and ideas, and advance open LLM research with us!
# Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place.
| [
"# Llama 3 DiscoLM German 8b v0.1 Experimental\n\n<p align=\"center\"><img src=\"disco_llama.webp\" width=\"400\"></p>",
"# Introduction\n\nLlama 3 DiscoLM German 8b v0.1 Experimental is an experimental Llama 3 based version of DiscoLM German.\n\nThis is an experimental release and not intended for production use. The model is still in development and will be updated with new features and improvements in the future.\n\nPlease find a online Demo here (we may take this offline for updates).",
"# Prompt Format\n\nDiscoLM German uses ChatML as the prompt format which enables OpenAI endpoint compatability and is supported by most inference libraries and frontends.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.",
"# Example Code for Inference",
"# Limitations & Biases\n\nThis model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information.\nThis model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution.",
"# License\n\nThis model is distributed under the META LLAMA 3 COMMUNITY LICENSE, see LICENSE for more information.",
"# Acknowledgements\n\nBuilt with Meta Llama 3.\n\nDiscoLM German is a DiscoResearch project, a collective effort by JP Harries, Björn Plüster and Daniel Auras.\n\nDevelopment of Llama 3 DiscoLM German 8b was sponsored by ellamind.\nCompute was sponsored generously by sysGen GmbH.\n\n<img src=\"URL alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>",
"# About DiscoResearch\n\nDiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our Discord, share your opinions and ideas, and advance open LLM research with us!",
"# Disclaimer\n\nThe license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #exl2 #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us \n",
"# Llama 3 DiscoLM German 8b v0.1 Experimental\n\n<p align=\"center\"><img src=\"disco_llama.webp\" width=\"400\"></p>",
"# Introduction\n\nLlama 3 DiscoLM German 8b v0.1 Experimental is an experimental Llama 3 based version of DiscoLM German.\n\nThis is an experimental release and not intended for production use. The model is still in development and will be updated with new features and improvements in the future.\n\nPlease find a online Demo here (we may take this offline for updates).",
"# Prompt Format\n\nDiscoLM German uses ChatML as the prompt format which enables OpenAI endpoint compatability and is supported by most inference libraries and frontends.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.",
"# Example Code for Inference",
"# Limitations & Biases\n\nThis model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information.\nThis model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution.",
"# License\n\nThis model is distributed under the META LLAMA 3 COMMUNITY LICENSE, see LICENSE for more information.",
"# Acknowledgements\n\nBuilt with Meta Llama 3.\n\nDiscoLM German is a DiscoResearch project, a collective effort by JP Harries, Björn Plüster and Daniel Auras.\n\nDevelopment of Llama 3 DiscoLM German 8b was sponsored by ellamind.\nCompute was sponsored generously by sysGen GmbH.\n\n<img src=\"URL alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>",
"# About DiscoResearch\n\nDiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our Discord, share your opinions and ideas, and advance open LLM research with us!",
"# Disclaimer\n\nThe license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tjl223/llama2-qlora-lyric-generator | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2024-04-24T13:13:03+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #has_space #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #has_space #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | AlienKevin/Meta-Llama-3-8B-tagllm-lang-1-reserved | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T13:13:07+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# 🔥 Classifiers of FinTOC 2022 Shared task winners (ISPRAS team) 🔥
Classifiers of textual lines of English, French and Spanish financial prospects in PDF format for the [FinTOC 2022 Shared task](https://wp.lancs.ac.uk/cfie/fintoc2022/).
## 🤗 Source code 🤗
Training scripts are available in the repository https://github.com/ispras/dedoc/ (see `scripts/fintoc2022` directory).
## 🤗 Task description 🤗
Lines are classified in two stages:
1. Binary classification title/not title (title detection task).
2. Classification of title lines into title depth classes (TOC generation task).
There are two types of classifiers according to the stage:
1. For the first stage, **binary classifiers** are trained. They return `bool` values: `True` for title lines and `False` for non-title lines.
2. For the second stage, **target classifiers** are trained. They return `int` title depth classes from 1 to 6. More important lines have a lesser depth.
## 🤗 Results evaluation 🤗
The training dataset contains English, French, and Spanish documents, so three language categories are available ("en", "fr", "sp").
To obtain document lines, we use [dedoc](https://dedoc.readthedocs.io) library (`dedoc.readers.PdfTabbyReader`, `dedoc.readers.PdfTxtlayerReader`), so two reader categories are available ("tabby", "txt_layer").
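For illustration, extracting lines with one of these readers might look like the sketch below. The exact `read()` signature and the attribute names are assumptions based on dedoc's documented reader interface, so please check the dedoc documentation for the current API.

```python
# Sketch only: the read() signature and the attribute names are assumptions
# based on dedoc's documented reader interface.
from dedoc.readers import PdfTabbyReader

reader = PdfTabbyReader()
document = reader.read(file_path="prospectus.pdf", parameters={})  # assumed signature

for line in document.lines:
    print(line.line)  # textual content of the line (assumed attribute name)
```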
To obtain FinTOC structure, we use our method described in [our article](https://aclanthology.org/2022.fnp-1.13.pdf) (winners of FinTOC 2022 Shared task!).
The results of our method (3-fold cross-validation on the FinTOC 2022 training dataset) for different languages and readers are given in the table below (they have changed slightly since the competition finished).
As in the FinTOC 2022 Shared task, we use two metrics for results evaluation (metrics from the [article](https://aclanthology.org/2022.fnp-1.12.pdf)):
**TD** - F1 measure for the title detection task, **TOC** - harmonic mean of Inex F1 score and Inex level accuracy for the TOC generation task.
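Concretely, the TOC score combines its two components as a standard harmonic mean; a small helper (with illustrative variable names) is shown below.

```python
def toc_score(inex_f1: float, inex_level_accuracy: float) -> float:
    """Harmonic mean of Inex F1 score and Inex level accuracy (the TOC metric)."""
    if inex_f1 + inex_level_accuracy == 0:
        return 0.0
    return 2 * inex_f1 * inex_level_accuracy / (inex_f1 + inex_level_accuracy)
```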
<table border="1" class="dataframe">
<thead>
<tr style="text-align: left;">
<th></th>
<th>TD 0</th>
<th>TD 1</th>
<th>TD 2</th>
<th>TD mean</th>
<th>TOC 0</th>
<th>TOC 1</th>
<th>TOC 2</th>
<th>TOC mean</th>
</tr>
</thead>
<tbody>
<tr>
<th>en_tabby</th>
<td>0.811522</td>
<td>0.833798</td>
<td>0.864239</td>
<td>0.836520</td>
<td>56.5</td>
<td>58.0</td>
<td>64.9</td>
<td>59.800000</td>
</tr>
<tr>
<th>en_txt_layer</th>
<td>0.821360</td>
<td>0.853258</td>
<td>0.833623</td>
<td>0.836081</td>
<td>57.8</td>
<td>62.1</td>
<td>57.8</td>
<td>59.233333</td>
</tr>
<tr>
<th>fr_tabby</th>
<td>0.753409</td>
<td>0.744232</td>
<td>0.782169</td>
<td>0.759937</td>
<td>51.2</td>
<td>47.9</td>
<td>51.5</td>
<td>50.200000</td>
</tr>
<tr>
<th>fr_txt_layer</th>
<td>0.740530</td>
<td>0.794460</td>
<td>0.766059</td>
<td>0.767016</td>
<td>45.6</td>
<td>52.2</td>
<td>50.1</td>
<td>49.300000</td>
</tr>
<tr>
<th>sp_tabby</th>
<td>0.606718</td>
<td>0.622839</td>
<td>0.599094</td>
<td>0.609550</td>
<td>37.1</td>
<td>43.6</td>
<td>43.4</td>
<td>41.366667</td>
</tr>
<tr>
<th>sp_txt_layer</th>
<td>0.629052</td>
<td>0.667976</td>
<td>0.446827</td>
<td>0.581285</td>
<td>46.4</td>
<td>48.8</td>
<td>30.7</td>
<td>41.966667</td>
</tr>
</tbody>
</table>
## 🤗 See also 🤗
Please see our article [ISPRAS@FinTOC-2022 shared task: Two-stage TOC generation model](https://aclanthology.org/2022.fnp-1.13.pdf)
to get more information about the FinTOC 2022 Shared task and our method of solving it.
We will be grateful if you cite our work (see citation in BibTeX format below).
```
@inproceedings{bogatenkova-etal-2022-ispras,
title = "{ISPRAS}@{F}in{TOC}-2022 Shared Task: Two-stage {TOC} Generation Model",
author = "Bogatenkova, Anastasiia and
Belyaeva, Oksana Vladimirovna and
Perminov, Andrew Igorevich and
Kozlov, Ilya Sergeevich",
editor = "El-Haj, Mahmoud and
Rayson, Paul and
Zmandar, Nadhem",
booktitle = "Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.fnp-1.13",
pages = "89--94"
}
``` | {"language": ["en", "fr", "es"], "license": "mit"} | dedoc/fintoc_classifiers | null | [
"en",
"fr",
"es",
"license:mit",
"region:us"
] | null | 2024-04-24T13:13:29+00:00 | [] | [
"en",
"fr",
"es"
] | TAGS
#en #fr #es #license-mit #region-us
|
# Classifiers of FinTOC 2022 Shared task winners (ISPRAS team)
Classifiers of textual lines of English, French and Spanish financial prospects in PDF format for the FinTOC 2022 Shared task.
## Source code
Training scripts are available in the repository URL (see 'scripts/fintoc2022' directory).
## Task description
Lines are classified in two stages:
1. Binary classification title/not title (title detection task).
2. Classification of title lines into title depth classes (TOC generation task).
There are two types of classifiers according to the stage:
1. For the first stage, binary classifiers are trained. They return 'bool' values: 'True' for title lines and 'False' for non-title lines.
2. For the second stage, target classifiers are trained. They return 'int' title depth classes from 1 to 6. More important lines have a lesser depth.
## Results evaluation
The training dataset contains English, French, and Spanish documents, so three language categories are available ("en", "fr", "sp").
To obtain document lines, we use dedoc library ('dedoc.readers.PdfTabbyReader', 'dedoc.readers.PdfTxtlayerReader'), so two reader categories are available ("tabby", "txt_layer").
To obtain FinTOC structure, we use our method described in our article (winners of FinTOC 2022 Shared task!).
The results of our method (3-fold cross-validation on the FinTOC 2022 training dataset) for different languages and readers are given in the table below (they have changed slightly since the competition finished).
As in the FinTOC 2022 Shared task, we use two metrics for results evaluation (metrics from the article):
TD - F1 measure for the title detection task, TOC - harmonic mean of Inex F1 score and Inex level accuracy for the TOC generation task.
<table border="1" class="dataframe">
<thead>
<tr style="text-align: left;">
<th></th>
<th>TD 0</th>
<th>TD 1</th>
<th>TD 2</th>
<th>TD mean</th>
<th>TOC 0</th>
<th>TOC 1</th>
<th>TOC 2</th>
<th>TOC mean</th>
</tr>
</thead>
<tbody>
<tr>
<th>en_tabby</th>
<td>0.811522</td>
<td>0.833798</td>
<td>0.864239</td>
<td>0.836520</td>
<td>56.5</td>
<td>58.0</td>
<td>64.9</td>
<td>59.800000</td>
</tr>
<tr>
<th>en_txt_layer</th>
<td>0.821360</td>
<td>0.853258</td>
<td>0.833623</td>
<td>0.836081</td>
<td>57.8</td>
<td>62.1</td>
<td>57.8</td>
<td>59.233333</td>
</tr>
<tr>
<th>fr_tabby</th>
<td>0.753409</td>
<td>0.744232</td>
<td>0.782169</td>
<td>0.759937</td>
<td>51.2</td>
<td>47.9</td>
<td>51.5</td>
<td>50.200000</td>
</tr>
<tr>
<th>fr_txt_layer</th>
<td>0.740530</td>
<td>0.794460</td>
<td>0.766059</td>
<td>0.767016</td>
<td>45.6</td>
<td>52.2</td>
<td>50.1</td>
<td>49.300000</td>
</tr>
<tr>
<th>sp_tabby</th>
<td>0.606718</td>
<td>0.622839</td>
<td>0.599094</td>
<td>0.609550</td>
<td>37.1</td>
<td>43.6</td>
<td>43.4</td>
<td>41.366667</td>
</tr>
<tr>
<th>sp_txt_layer</th>
<td>0.629052</td>
<td>0.667976</td>
<td>0.446827</td>
<td>0.581285</td>
<td>46.4</td>
<td>48.8</td>
<td>30.7</td>
<td>41.966667</td>
</tr>
</tbody>
</table>
## See also
Please see our article ISPRAS@FinTOC-2022 shared task: Two-stage TOC generation model
to get more information about the FinTOC 2022 Shared task and our method of solving it.
We will be grateful if you cite our work (see citation in BibTeX format below).
| [
"# Classifiers of FinTOC 2022 Shared task winners (ISPRAS team) \n\nClassifiers of texual lines of English, French and Spanish financial prospects in PDF format for the FinTOC 2022 Shared task.",
"## Source code \n\nTraining scripts are available in the repository URL (see 'scripts/fintoc2022' directory).",
"## Task description \n\nLines are classified in two stages:\n1. Binary classification title/not title (title detection task).\n2. Classification of title lines into title depth classes (TOC generation task).\n\nThere are two types of classifiers according to the stage:\n1. For the first stage, binary classifiers are trained. They return 'bool' values: 'True' for title lines and 'False' for non-title lines.\n2. For the second stage, target classifiers are trained. They return 'int' title depth classes from 1 to 6. More important lines have a lesser depth.",
"## Results evaluation \n\nThe training dataset contains English, French, and Spanish documents, so three language categories are available (\"en\", \"fr\", \"sp\").\nTo obtain document lines, we use dedoc library ('dedoc.readers.PdfTabbyReader', 'dedoc.readers.PdfTxtlayerReader'), so two reader categories are available (\"tabby\", \"txt_layer\").\n\nTo obtain FinTOC structure, we use our method described in our article (winners of FinTOC 2022 Shared task!).\nThe results of our method (3-fold cross-validation on the FinTOC 2022 training dataset) for different languages and readers are given in the table below (they slightly changed since the competition finished).\nAs in the FinTOC 2022 Shared task, we use two metrics for results evaluation (metrics from the article):\nTD - F1 measure for the title detection task, TOC - harmonic mean of Inex F1 score and Inex level accuracy for the TOC generation task.\n\n<table border=\"1\" class=\"dataframe\">\n <thead>\n <tr style=\"text-align: left;\">\n <th></th>\n <th>TD 0</th>\n <th>TD 1</th>\n <th>TD 2</th>\n <th>TD mean</th>\n <th>TOC 0</th>\n <th>TOC 1</th>\n <th>TOC 2</th>\n <th>TOC mean</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>en_tabby</th>\n <td>0.811522</td>\n <td>0.833798</td>\n <td>0.864239</td>\n <td>0.836520</td>\n <td>56.5</td>\n <td>58.0</td>\n <td>64.9</td>\n <td>59.800000</td>\n </tr>\n <tr>\n <th>en_txt_layer</th>\n <td>0.821360</td>\n <td>0.853258</td>\n <td>0.833623</td>\n <td>0.836081</td>\n <td>57.8</td>\n <td>62.1</td>\n <td>57.8</td>\n <td>59.233333</td>\n </tr>\n <tr>\n <th>fr_tabby</th>\n <td>0.753409</td>\n <td>0.744232</td>\n <td>0.782169</td>\n <td>0.759937</td>\n <td>51.2</td>\n <td>47.9</td>\n <td>51.5</td>\n <td>50.200000</td>\n </tr>\n <tr>\n <th>fr_txt_layer</th>\n <td>0.740530</td>\n <td>0.794460</td>\n <td>0.766059</td>\n <td>0.767016</td>\n <td>45.6</td>\n <td>52.2</td>\n <td>50.1</td>\n <td>49.300000</td>\n </tr>\n <tr>\n <th>sp_tabby</th>\n <td>0.606718</td>\n <td>0.622839</td>\n <td>0.599094</td>\n <td>0.609550</td>\n <td>37.1</td>\n <td>43.6</td>\n <td>43.4</td>\n <td>41.366667</td>\n </tr>\n <tr>\n <th>sp_txt_layer</th>\n <td>0.629052</td>\n <td>0.667976</td>\n <td>0.446827</td>\n <td>0.581285</td>\n <td>46.4</td>\n <td>48.8</td>\n <td>30.7</td>\n <td>41.966667</td>\n </tr>\n </tbody>\n</table>",
"## See also \n\nPlease see our article ISPRAS@FinTOC-2022 shared task: Two-stage TOC generation model\nto get more information about the FinTOC 2022 Shared task and our method of solving it.\nWe will be grateful, if you cite our work (see citation in BibTeX format below)."
] | [
"TAGS\n#en #fr #es #license-mit #region-us \n",
"# Classifiers of FinTOC 2022 Shared task winners (ISPRAS team) \n\nClassifiers of texual lines of English, French and Spanish financial prospects in PDF format for the FinTOC 2022 Shared task.",
"## Source code \n\nTraining scripts are available in the repository URL (see 'scripts/fintoc2022' directory).",
"## Task description \n\nLines are classified in two stages:\n1. Binary classification title/not title (title detection task).\n2. Classification of title lines into title depth classes (TOC generation task).\n\nThere are two types of classifiers according to the stage:\n1. For the first stage, binary classifiers are trained. They return 'bool' values: 'True' for title lines and 'False' for non-title lines.\n2. For the second stage, target classifiers are trained. They return 'int' title depth classes from 1 to 6. More important lines have a lesser depth.",
"## Results evaluation \n\nThe training dataset contains English, French, and Spanish documents, so three language categories are available (\"en\", \"fr\", \"sp\").\nTo obtain document lines, we use dedoc library ('dedoc.readers.PdfTabbyReader', 'dedoc.readers.PdfTxtlayerReader'), so two reader categories are available (\"tabby\", \"txt_layer\").\n\nTo obtain FinTOC structure, we use our method described in our article (winners of FinTOC 2022 Shared task!).\nThe results of our method (3-fold cross-validation on the FinTOC 2022 training dataset) for different languages and readers are given in the table below (they slightly changed since the competition finished).\nAs in the FinTOC 2022 Shared task, we use two metrics for results evaluation (metrics from the article):\nTD - F1 measure for the title detection task, TOC - harmonic mean of Inex F1 score and Inex level accuracy for the TOC generation task.\n\n<table border=\"1\" class=\"dataframe\">\n <thead>\n <tr style=\"text-align: left;\">\n <th></th>\n <th>TD 0</th>\n <th>TD 1</th>\n <th>TD 2</th>\n <th>TD mean</th>\n <th>TOC 0</th>\n <th>TOC 1</th>\n <th>TOC 2</th>\n <th>TOC mean</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>en_tabby</th>\n <td>0.811522</td>\n <td>0.833798</td>\n <td>0.864239</td>\n <td>0.836520</td>\n <td>56.5</td>\n <td>58.0</td>\n <td>64.9</td>\n <td>59.800000</td>\n </tr>\n <tr>\n <th>en_txt_layer</th>\n <td>0.821360</td>\n <td>0.853258</td>\n <td>0.833623</td>\n <td>0.836081</td>\n <td>57.8</td>\n <td>62.1</td>\n <td>57.8</td>\n <td>59.233333</td>\n </tr>\n <tr>\n <th>fr_tabby</th>\n <td>0.753409</td>\n <td>0.744232</td>\n <td>0.782169</td>\n <td>0.759937</td>\n <td>51.2</td>\n <td>47.9</td>\n <td>51.5</td>\n <td>50.200000</td>\n </tr>\n <tr>\n <th>fr_txt_layer</th>\n <td>0.740530</td>\n <td>0.794460</td>\n <td>0.766059</td>\n <td>0.767016</td>\n <td>45.6</td>\n <td>52.2</td>\n <td>50.1</td>\n <td>49.300000</td>\n </tr>\n <tr>\n <th>sp_tabby</th>\n <td>0.606718</td>\n <td>0.622839</td>\n <td>0.599094</td>\n <td>0.609550</td>\n <td>37.1</td>\n <td>43.6</td>\n <td>43.4</td>\n <td>41.366667</td>\n </tr>\n <tr>\n <th>sp_txt_layer</th>\n <td>0.629052</td>\n <td>0.667976</td>\n <td>0.446827</td>\n <td>0.581285</td>\n <td>46.4</td>\n <td>48.8</td>\n <td>30.7</td>\n <td>41.966667</td>\n </tr>\n </tbody>\n</table>",
"## See also \n\nPlease see our article ISPRAS@FinTOC-2022 shared task: Two-stage TOC generation model\nto get more information about the FinTOC 2022 Shared task and our method of solving it.\nWe will be grateful, if you cite our work (see citation in BibTeX format below)."
] |
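The two-stage scheme described in the card above (binary title detection, then a depth class from 1 to 6 per title line) can be illustrated with a minimal sketch. Everything below is hypothetical: the rule-based functions are stand-ins for the trained classifiers, and the `Line` helper type is an assumption, not part of the ISPRAS code.

```python
# Minimal sketch of the two-stage FinTOC pipeline; the stand-in rules below
# replace the trained binary and depth classifiers described in the card.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Line:
    text: str
    depth: Optional[int] = None  # 1 (most important) .. 6; None for non-title lines

def is_title(line: Line) -> bool:
    # Stage 1, title detection: stands in for the trained binary classifier.
    stripped = line.text.strip()
    return stripped.isupper() or stripped.endswith(":")

def title_depth(line: Line) -> int:
    # Stage 2, TOC generation: stands in for the trained depth classifier (1..6).
    indent = len(line.text) - len(line.text.lstrip())
    return min(1 + indent // 4, 6)

def build_toc(lines: List[Line]) -> List[Line]:
    toc = []
    for line in lines:
        if is_title(line):                   # stage 1: keep title lines only
            line.depth = title_depth(line)   # stage 2: assign a depth class
            toc.append(line)
    return toc

doc = [Line("SUMMARY"), Line("    Risk factors:"), Line("Plain body text.")]
for entry in build_toc(doc):
    print(entry.depth, entry.text.strip())
```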
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | berquetR/phi_first_train | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-24T13:13:43+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7867
- Accuracy: 0.9203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
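
For reference, the hyperparameters listed above map onto a `transformers.TrainingArguments` call roughly as follows. This is a hedged reconstruction, not the author's actual training script; the output directory name is an assumption, and the Adam betas/epsilon shown in the list are the library defaults.

```python
# Hypothetical reconstruction of the listed hyperparameters; not the card
# author's actual training script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-clinc",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",  # Adam betas=(0.9,0.999), eps=1e-08 are defaults
    num_train_epochs=5,
)
```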
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.3049 | 1.0 | 318 | 3.2936 | 0.7268 |
| 2.6445 | 2.0 | 636 | 1.8843 | 0.8535 |
| 1.5643 | 3.0 | 954 | 1.1692 | 0.8916 |
| 1.028 | 4.0 | 1272 | 0.8712 | 0.9145 |
| 0.8138 | 5.0 | 1590 | 0.7867 | 0.9203 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu118
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": []}]} | taoyoung/distilbert-base-uncased-finetuned-clinc | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:14:11+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-clinc
=======================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7867
* Accuracy: 0.9203
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 48
* eval\_batch\_size: 48
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.2+cu118
* Datasets 2.19.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu118\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu118\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** VinhLlama
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | VinhLlama/Gemma7bVinhntV04_16bit | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:15:07+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: VinhLlama
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: VinhLlama\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: VinhLlama\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** bharathirajan89
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"} | bharathirajan89/bharathi_mistral_7b_pulse_unsloth_v2_merged | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:15:30+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: bharathirajan89
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: bharathirajan89\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: bharathirajan89\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
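A minimal inference sketch for the merged model above, assuming the standard `transformers` API. The repo id is taken from this record's metadata; the chat template, prompt, and generation settings are assumptions (Mistral-instruct derivatives usually ship a chat template, but the card does not confirm one).

```python
# Hypothetical inference sketch; repo id from this record, everything else assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "bharathirajan89/bharathi_mistral_7b_pulse_unsloth_v2_merged"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "What domain was this model tuned for?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```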
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_base_finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the English subset of the pii200k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1096
- Overall Precision: 0.8992
- Overall Recall: 0.9251
- Overall F1: 0.9120
- Overall Accuracy: 0.9546
- Accountname F1: 0.9861
- Accountnumber F1: 0.9809
- Age F1: 0.9202
- Amount F1: 0.9408
- Bic F1: 0.8869
- Bitcoinaddress F1: 0.9502
- Buildingnumber F1: 0.8860
- City F1: 0.9207
- Companyname F1: 0.9693
- County F1: 0.9725
- Creditcardcvv F1: 0.9107
- Creditcardissuer F1: 0.9872
- Creditcardnumber F1: 0.8675
- Currency F1: 0.7147
- Currencycode F1: 0.6585
- Currencyname F1: 0.0123
- Currencysymbol F1: 0.8368
- Date F1: 0.8193
- Dob F1: 0.5701
- Email F1: 0.9953
- Ethereumaddress F1: 0.9877
- Eyecolor F1: 0.9302
- Firstname F1: 0.9602
- Gender F1: 0.9568
- Height F1: 0.9695
- Iban F1: 0.9751
- Ip F1: 0.0
- Ipv4 F1: 0.8265
- Ipv6 F1: 0.7527
- Jobarea F1: 0.9133
- Jobtitle F1: 0.9728
- Jobtype F1: 0.9297
- Lastname F1: 0.9333
- Litecoinaddress F1: 0.8225
- Mac F1: 0.9957
- Maskednumber F1: 0.8108
- Middlename F1: 0.9247
- Nearbygpscoordinate F1: 1.0
- Ordinaldirection F1: 0.9533
- Password F1: 0.9174
- Phoneimei F1: 0.9862
- Phonenumber F1: 0.9759
- Pin F1: 0.8829
- Prefix F1: 0.9340
- Secondaryaddress F1: 0.9829
- Sex F1: 0.9791
- Ssn F1: 0.9703
- State F1: 0.9521
- Street F1: 0.9349
- Time F1: 0.9816
- Url F1: 0.9982
- Useragent F1: 0.9813
- Username F1: 0.9743
- Vehiclevin F1: 0.9712
- Vehiclevrm F1: 0.9526
- Zipcode F1: 0.8184
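
Given the per-entity scores above, a minimal usage sketch for this PII token classifier looks as follows. The repo id comes from this record's metadata; the example sentence is made up.

```python
# Hypothetical usage sketch; repo id from this record's metadata.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="burkelive/distilbert_base_finetuned",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
text = "Contact Jane Doe at jane.doe@example.com or +1-202-555-0142."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```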
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Accountname F1 | Accountnumber F1 | Age F1 | Amount F1 | Bic F1 | Bitcoinaddress F1 | Buildingnumber F1 | City F1 | Companyname F1 | County F1 | Creditcardcvv F1 | Creditcardissuer F1 | Creditcardnumber F1 | Currency F1 | Currencycode F1 | Currencyname F1 | Currencysymbol F1 | Date F1 | Dob F1 | Email F1 | Ethereumaddress F1 | Eyecolor F1 | Firstname F1 | Gender F1 | Height F1 | Iban F1 | Ip F1 | Ipv4 F1 | Ipv6 F1 | Jobarea F1 | Jobtitle F1 | Jobtype F1 | Lastname F1 | Litecoinaddress F1 | Mac F1 | Maskednumber F1 | Middlename F1 | Nearbygpscoordinate F1 | Ordinaldirection F1 | Password F1 | Phoneimei F1 | Phonenumber F1 | Pin F1 | Prefix F1 | Secondaryaddress F1 | Sex F1 | Ssn F1 | State F1 | Street F1 | Time F1 | Url F1 | Useragent F1 | Username F1 | Vehiclevin F1 | Vehiclevrm F1 | Zipcode F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:--------------:|:----------------:|:------:|:---------:|:------:|:-----------------:|:-----------------:|:-------:|:--------------:|:---------:|:----------------:|:-------------------:|:-------------------:|:-----------:|:---------------:|:---------------:|:-----------------:|:-------:|:------:|:--------:|:------------------:|:-----------:|:------------:|:---------:|:---------:|:-------:|:------:|:-------:|:-------:|:----------:|:-----------:|:----------:|:-----------:|:------------------:|:------:|:---------------:|:-------------:|:----------------------:|:-------------------:|:-----------:|:------------:|:--------------:|:------:|:---------:|:-------------------:|:------:|:------:|:--------:|:---------:|:-------:|:------:|:------------:|:-----------:|:-------------:|:-------------:|:----------:|
| 0.4764 | 1.0 | 1088 | 0.2240 | 0.6718 | 0.7532 | 0.7102 | 0.9283 | 0.8807 | 0.9560 | 0.7916 | 0.6034 | 0.4684 | 0.8385 | 0.6515 | 0.6041 | 0.8988 | 0.6165 | 0.2137 | 0.7101 | 0.6661 | 0.3774 | 0.0 | 0.0 | 0.4411 | 0.7095 | 0.1332 | 0.9859 | 0.9712 | 0.4963 | 0.8349 | 0.6953 | 0.8675 | 0.9045 | 0.0018 | 0.0484 | 0.7792 | 0.5532 | 0.7598 | 0.6803 | 0.7476 | 0.4354 | 0.9806 | 0.5663 | 0.1526 | 0.9985 | 0.8345 | 0.7584 | 0.9741 | 0.9326 | 0.1657 | 0.9104 | 0.8907 | 0.8920 | 0.8820 | 0.4878 | 0.6348 | 0.9580 | 0.9759 | 0.9398 | 0.9054 | 0.7335 | 0.5931 | 0.5893 |
| 0.1476 | 2.0 | 2176 | 0.1248 | 0.8445 | 0.9023 | 0.8725 | 0.9494 | 0.9653 | 0.9700 | 0.9177 | 0.9124 | 0.9003 | 0.9273 | 0.8761 | 0.9196 | 0.9694 | 0.9537 | 0.8958 | 0.9825 | 0.8528 | 0.6293 | 0.4828 | 0.0 | 0.7793 | 0.8291 | 0.5297 | 0.9882 | 0.9758 | 0.9064 | 0.9353 | 0.9426 | 0.9759 | 0.9313 | 0.0288 | 0.6916 | 0.4490 | 0.8870 | 0.9542 | 0.9176 | 0.8924 | 0.7650 | 0.9871 | 0.6870 | 0.8530 | 1.0 | 0.9469 | 0.9526 | 0.9890 | 0.9447 | 0.8103 | 0.9261 | 0.9694 | 0.9684 | 0.9611 | 0.9417 | 0.8784 | 0.9660 | 0.9973 | 0.9657 | 0.9639 | 0.9744 | 0.9617 | 0.8035 |
| 0.0959 | 3.0 | 3264 | 0.1096 | 0.8992 | 0.9251 | 0.9120 | 0.9546 | 0.9861 | 0.9809 | 0.9202 | 0.9408 | 0.8869 | 0.9502 | 0.8860 | 0.9207 | 0.9693 | 0.9725 | 0.9107 | 0.9872 | 0.8675 | 0.7147 | 0.6585 | 0.0123 | 0.8368 | 0.8193 | 0.5701 | 0.9953 | 0.9877 | 0.9302 | 0.9602 | 0.9568 | 0.9695 | 0.9751 | 0.0 | 0.8265 | 0.7527 | 0.9133 | 0.9728 | 0.9297 | 0.9333 | 0.8225 | 0.9957 | 0.8108 | 0.9247 | 1.0 | 0.9533 | 0.9174 | 0.9862 | 0.9759 | 0.8829 | 0.9340 | 0.9829 | 0.9791 | 0.9703 | 0.9521 | 0.9349 | 0.9816 | 0.9982 | 0.9813 | 0.9743 | 0.9712 | 0.9526 | 0.8184 |
| 0.0793 | 4.0 | 4352 | 0.1166 | 0.8968 | 0.9294 | 0.9128 | 0.9555 | 0.9816 | 0.9853 | 0.9256 | 0.9514 | 0.9206 | 0.8850 | 0.9081 | 0.9223 | 0.9722 | 0.9769 | 0.9107 | 0.9952 | 0.8934 | 0.7098 | 0.7304 | 0.1316 | 0.8543 | 0.7954 | 0.6306 | 0.9953 | 0.9789 | 0.9388 | 0.9600 | 0.9645 | 0.9863 | 0.9559 | 0.0707 | 0.7875 | 0.7765 | 0.9058 | 0.9721 | 0.9291 | 0.9426 | 0.7036 | 0.9744 | 0.8076 | 0.9394 | 1.0 | 0.9651 | 0.9392 | 0.9903 | 0.9805 | 0.8970 | 0.9352 | 0.9841 | 0.9751 | 0.9795 | 0.9718 | 0.9129 | 0.9772 | 0.9955 | 0.9780 | 0.9793 | 0.9329 | 0.9753 | 0.8933 |
| 0.0625 | 5.0 | 5440 | 0.1284 | 0.9022 | 0.9339 | 0.9178 | 0.9573 | 0.9889 | 0.9817 | 0.9278 | 0.9650 | 0.9427 | 0.9145 | 0.9143 | 0.9510 | 0.9760 | 0.9826 | 0.9432 | 0.9936 | 0.8812 | 0.6920 | 0.7529 | 0.3642 | 0.8702 | 0.8235 | 0.6588 | 0.9982 | 0.9877 | 0.9408 | 0.9693 | 0.9723 | 0.9931 | 0.9761 | 0.2130 | 0.7683 | 0.7055 | 0.9149 | 0.9801 | 0.9394 | 0.9389 | 0.7842 | 0.9787 | 0.8047 | 0.9388 | 1.0 | 0.9710 | 0.9698 | 0.9890 | 0.9815 | 0.9329 | 0.9351 | 0.9861 | 0.9772 | 0.9744 | 0.9713 | 0.9361 | 0.9735 | 1.0 | 0.9823 | 0.9883 | 0.9744 | 0.9756 | 0.8794 |
| 0.0402 | 6.0 | 6528 | 0.1608 | 0.9100 | 0.9334 | 0.9216 | 0.9578 | 0.9926 | 0.9835 | 0.9295 | 0.9634 | 0.9091 | 0.9405 | 0.9081 | 0.9517 | 0.9788 | 0.9806 | 0.9419 | 0.9904 | 0.8960 | 0.7107 | 0.7635 | 0.3600 | 0.8756 | 0.8438 | 0.6620 | 0.9982 | 0.9877 | 0.9464 | 0.9667 | 0.9722 | 0.9931 | 0.9704 | 0.2265 | 0.7973 | 0.7070 | 0.9187 | 0.9777 | 0.9392 | 0.9476 | 0.8412 | 0.9892 | 0.8187 | 0.9368 | 1.0 | 0.9710 | 0.9581 | 0.9890 | 0.9826 | 0.9231 | 0.9195 | 0.9872 | 0.9800 | 0.9806 | 0.9669 | 0.9398 | 0.9744 | 1.0 | 0.9779 | 0.9875 | 0.9712 | 0.9622 | 0.8785 |
| 0.0211 | 7.0 | 7616 | 0.1862 | 0.9040 | 0.9354 | 0.9194 | 0.9567 | 0.9907 | 0.9872 | 0.9297 | 0.9664 | 0.9524 | 0.9489 | 0.9135 | 0.9535 | 0.9836 | 0.9816 | 0.9507 | 0.9920 | 0.8856 | 0.6804 | 0.7692 | 0.3585 | 0.8763 | 0.8366 | 0.6809 | 0.9982 | 0.9877 | 0.9524 | 0.9708 | 0.9679 | 0.9897 | 0.9797 | 0.2845 | 0.7481 | 0.6489 | 0.9235 | 0.9794 | 0.9367 | 0.9480 | 0.8338 | 0.9787 | 0.8172 | 0.9422 | 1.0 | 0.9711 | 0.9699 | 0.9903 | 0.9836 | 0.9193 | 0.9368 | 0.9872 | 0.9820 | 0.9775 | 0.9726 | 0.9389 | 0.9789 | 1.0 | 0.9790 | 0.9899 | 0.9935 | 0.9756 | 0.8908 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert_base_finetuned", "results": []}]} | burkelive/distilbert_base_finetuned | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:16:34+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert\_base\_finetuned
===========================
This model is a fine-tuned version of distilbert-base-uncased on the English subset of the pii200k dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1096
* Overall Precision: 0.8992
* Overall Recall: 0.9251
* Overall F1: 0.9120
* Overall Accuracy: 0.9546
* Accountname F1: 0.9861
* Accountnumber F1: 0.9809
* Age F1: 0.9202
* Amount F1: 0.9408
* Bic F1: 0.8869
* Bitcoinaddress F1: 0.9502
* Buildingnumber F1: 0.8860
* City F1: 0.9207
* Companyname F1: 0.9693
* County F1: 0.9725
* Creditcardcvv F1: 0.9107
* Creditcardissuer F1: 0.9872
* Creditcardnumber F1: 0.8675
* Currency F1: 0.7147
* Currencycode F1: 0.6585
* Currencyname F1: 0.0123
* Currencysymbol F1: 0.8368
* Date F1: 0.8193
* Dob F1: 0.5701
* Email F1: 0.9953
* Ethereumaddress F1: 0.9877
* Eyecolor F1: 0.9302
* Firstname F1: 0.9602
* Gender F1: 0.9568
* Height F1: 0.9695
* Iban F1: 0.9751
* Ip F1: 0.0
* Ipv4 F1: 0.8265
* Ipv6 F1: 0.7527
* Jobarea F1: 0.9133
* Jobtitle F1: 0.9728
* Jobtype F1: 0.9297
* Lastname F1: 0.9333
* Litecoinaddress F1: 0.8225
* Mac F1: 0.9957
* Maskednumber F1: 0.8108
* Middlename F1: 0.9247
* Nearbygpscoordinate F1: 1.0
* Ordinaldirection F1: 0.9533
* Password F1: 0.9174
* Phoneimei F1: 0.9862
* Phonenumber F1: 0.9759
* Pin F1: 0.8829
* Prefix F1: 0.9340
* Secondaryaddress F1: 0.9829
* Sex F1: 0.9791
* Ssn F1: 0.9703
* State F1: 0.9521
* Street F1: 0.9349
* Time F1: 0.9816
* Url F1: 0.9982
* Useragent F1: 0.9813
* Username F1: 0.9743
* Vehiclevin F1: 0.9712
* Vehiclevrm F1: 0.9526
* Zipcode F1: 0.8184
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.2
* num\_epochs: 7
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.2\n* num\\_epochs: 7",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.2\n* num\\_epochs: 7",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | kangXn/enta-st-mde | null | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:17:05+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #deberta-v2 #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #deberta-v2 #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | siddharth797/gemma-1.1-2B-Finetune | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T13:17:07+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
image-to-text | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0a0+81ea7a4
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "pipeline_tag": "image-to-text", "model-index": [{"name": "donut-base-sroie", "results": []}]} | jaydip-tss/donut-base-sroie | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"image-to-text",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:17:34+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #image-to-text #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
|
# donut-base-sroie
This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0a0+81ea7a4
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# donut-base-sroie\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.37.2\n- Pytorch 2.2.0a0+81ea7a4\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #image-to-text #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n",
"# donut-base-sroie\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.37.2\n- Pytorch 2.2.0a0+81ea7a4\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/WesPro/PsyKidelic_Llama3_LimaRP
<!-- provided-files -->
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
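As a concrete starting point, here is a hedged sketch that downloads one quant from this repo and runs it with `llama-cpp-python`; the package choice, context size, and generation settings are assumptions, not part of this card.

```python
# Hedged usage sketch -- assumes `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/PsyKidelic_Llama3_LimaRP-GGUF",
    filename="PsyKidelic_Llama3_LimaRP.Q4_K_M.gguf",  # "fast, recommended" row below
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write one sentence about llamas.", max_tokens=64)
print(out["choices"][0]["text"])
```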
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PsyKidelic_Llama3_LimaRP-GGUF/resolve/main/PsyKidelic_Llama3_LimaRP.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/PsyKidelic_Llama3_LimaRP-GGUF/resolve/main/PsyKidelic_Llama3_LimaRP.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/PsyKidelic_Llama3_LimaRP-GGUF/resolve/main/PsyKidelic_Llama3_LimaRP.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/PsyKidelic_Llama3_LimaRP-GGUF/resolve/main/PsyKidelic_Llama3_LimaRP.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/PsyKidelic_Llama3_LimaRP-GGUF/resolve/main/PsyKidelic_Llama3_LimaRP.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/PsyKidelic_Llama3_LimaRP-GGUF/resolve/main/PsyKidelic_Llama3_LimaRP.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PsyKidelic_Llama3_LimaRP-GGUF/resolve/main/PsyKidelic_Llama3_LimaRP.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/PsyKidelic_Llama3_LimaRP-GGUF/resolve/main/PsyKidelic_Llama3_LimaRP.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/PsyKidelic_Llama3_LimaRP-GGUF/resolve/main/PsyKidelic_Llama3_LimaRP.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PsyKidelic_Llama3_LimaRP-GGUF/resolve/main/PsyKidelic_Llama3_LimaRP.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PsyKidelic_Llama3_LimaRP-GGUF/resolve/main/PsyKidelic_Llama3_LimaRP.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/PsyKidelic_Llama3_LimaRP-GGUF/resolve/main/PsyKidelic_Llama3_LimaRP.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/PsyKidelic_Llama3_LimaRP-GGUF/resolve/main/PsyKidelic_Llama3_LimaRP.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PsyKidelic_Llama3_LimaRP-GGUF/resolve/main/PsyKidelic_Llama3_LimaRP.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/PsyKidelic_Llama3_LimaRP-GGUF/resolve/main/PsyKidelic_Llama3_LimaRP.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "WesPro/PsyKidelic_Llama3_LimaRP", "quantized_by": "mradermacher"} | mradermacher/PsyKidelic_Llama3_LimaRP-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:WesPro/PsyKidelic_Llama3_LimaRP",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:18:33+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #mergekit #merge #en #base_model-WesPro/PsyKidelic_Llama3_LimaRP #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #mergekit #merge #en #base_model-WesPro/PsyKidelic_Llama3_LimaRP #endpoints_compatible #region-us \n"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-aadhaar-red
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the None dataset.
It achieves the following results on the evaluation set (an inference sketch follows the metrics):
- Loss: 0.0287
- Adhaar Number: {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39}
- Ame: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23}
- Ather Name: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2}
- Ather Name Back: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19}
- Ather Name Front Top: {'precision': 0.9166666666666666, 'recall': 1.0, 'f1': 0.9565217391304348, 'number': 11}
- Ddress Back: {'precision': 0.9512195121951219, 'recall': 0.9629629629629629, 'f1': 0.9570552147239264, 'number': 81}
- Ddress Front: {'precision': 0.9615384615384616, 'recall': 0.9615384615384616, 'f1': 0.9615384615384616, 'number': 52}
- Ender: {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21}
- Ob: {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21}
- Obile Number: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10}
- Ther: {'precision': 0.958974358974359, 'recall': 0.9689119170984456, 'f1': 0.9639175257731959, 'number': 193}
- Overall Precision: 0.9623
- Overall Recall: 0.9725
- Overall F1: 0.9673
- Overall Accuracy: 0.9973
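To show how the checkpoint is meant to be consumed, here is a hedged inference sketch. LiLT expects one normalized (0-1000) bounding box per token; the dummy box below is a placeholder standing in for real OCR coordinates, and the repo id is taken from this card.

```python
# Hedged inference sketch -- repo id from this card; the bbox values are dummies.
import torch
from transformers import AutoTokenizer, LiltForTokenClassification

repo_id = "prashantloni/lilt-en-aadhaar-red"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = LiltForTokenClassification.from_pretrained(repo_id)

encoding = tokenizer("Aadhaar 1234 5678 9012", return_tensors="pt")
# One box per token, normalized to a 0-1000 page grid; real usage would take
# these from an OCR engine rather than repeating a dummy box.
bbox = torch.tensor([[[0, 0, 1000, 1000]] * encoding.input_ids.shape[1]])

with torch.no_grad():
    logits = model(**encoding, bbox=bbox).logits
predictions = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```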
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Adhaar Number | Ame | Ather Name | Ather Name Back | Ather Name Front Top | Ddress Back | Ddress Front | Ender | Ob | Obile Number | Ther | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.1651 | 10.0 | 200 | 0.0226 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 39} | {'precision': 0.9130434782608695, 'recall': 0.9130434782608695, 'f1': 0.9130434782608695, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.926829268292683, 'recall': 0.9382716049382716, 'f1': 0.9325153374233128, 'number': 81} | {'precision': 0.9811320754716981, 'recall': 1.0, 'f1': 0.9904761904761905, 'number': 52} | {'precision': 0.9047619047619048, 'recall': 0.9047619047619048, 'f1': 0.9047619047619048, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9384615384615385, 'recall': 0.9481865284974094, 'f1': 0.9432989690721649, 'number': 193} | 0.9497 | 0.9597 | 0.9547 | 0.9962 |
| 0.004 | 20.0 | 400 | 0.0270 | {'precision': 0.9487179487179487, 'recall': 0.9487179487179487, 'f1': 0.9487179487179487, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.926829268292683, 'recall': 0.9382716049382716, 'f1': 0.9325153374233128, 'number': 81} | {'precision': 0.9615384615384616, 'recall': 0.9615384615384616, 'f1': 0.9615384615384616, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9090909090909091, 'recall': 0.9523809523809523, 'f1': 0.9302325581395349, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9333333333333333, 'recall': 0.9430051813471503, 'f1': 0.9381443298969072, 'number': 193} | 0.9454 | 0.9534 | 0.9494 | 0.9964 |
| 0.0016 | 30.0 | 600 | 0.0321 | {'precision': 0.925, 'recall': 0.9487179487179487, 'f1': 0.9367088607594937, 'number': 39} | {'precision': 0.9565217391304348, 'recall': 0.9565217391304348, 'f1': 0.9565217391304348, 'number': 23} | {'precision': 0.6666666666666666, 'recall': 1.0, 'f1': 0.8, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.9146341463414634, 'recall': 0.9259259259259259, 'f1': 0.9202453987730062, 'number': 81} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9282051282051282, 'recall': 0.9378238341968912, 'f1': 0.9329896907216495, 'number': 193} | 0.9414 | 0.9534 | 0.9474 | 0.9959 |
| 0.0013 | 40.0 | 800 | 0.0243 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.9390243902439024, 'recall': 0.9506172839506173, 'f1': 0.9447852760736196, 'number': 81} | {'precision': 0.9803921568627451, 'recall': 0.9615384615384616, 'f1': 0.970873786407767, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9487179487179487, 'recall': 0.9585492227979274, 'f1': 0.9536082474226804, 'number': 193} | 0.96 | 0.9661 | 0.9630 | 0.9973 |
| 0.0006 | 50.0 | 1000 | 0.0400 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 0.8947368421052632, 'f1': 0.9444444444444444, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.8902439024390244, 'recall': 0.9012345679012346, 'f1': 0.8957055214723927, 'number': 81} | {'precision': 0.9803921568627451, 'recall': 0.9615384615384616, 'f1': 0.970873786407767, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9384615384615385, 'recall': 0.9481865284974094, 'f1': 0.9432989690721649, 'number': 193} | 0.9471 | 0.9492 | 0.9481 | 0.9951 |
| 0.0003 | 60.0 | 1200 | 0.0323 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 0.9565217391304348, 'recall': 0.9565217391304348, 'f1': 0.9565217391304348, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 0.9166666666666666, 'recall': 1.0, 'f1': 0.9565217391304348, 'number': 11} | {'precision': 0.926829268292683, 'recall': 0.9382716049382716, 'f1': 0.9325153374233128, 'number': 81} | {'precision': 0.9423076923076923, 'recall': 0.9423076923076923, 'f1': 0.9423076923076923, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9384615384615385, 'recall': 0.9481865284974094, 'f1': 0.9432989690721649, 'number': 193} | 0.9455 | 0.9555 | 0.9505 | 0.9964 |
| 0.0005 | 70.0 | 1400 | 0.0287 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 0.9166666666666666, 'recall': 1.0, 'f1': 0.9565217391304348, 'number': 11} | {'precision': 0.9512195121951219, 'recall': 0.9629629629629629, 'f1': 0.9570552147239264, 'number': 81} | {'precision': 0.9615384615384616, 'recall': 0.9615384615384616, 'f1': 0.9615384615384616, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.958974358974359, 'recall': 0.9689119170984456, 'f1': 0.9639175257731959, 'number': 193} | 0.9623 | 0.9725 | 0.9673 | 0.9973 |
| 0.0004 | 80.0 | 1600 | 0.0417 | {'precision': 0.9487179487179487, 'recall': 0.9487179487179487, 'f1': 0.9487179487179487, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 0.9166666666666666, 'recall': 1.0, 'f1': 0.9565217391304348, 'number': 11} | {'precision': 0.9036144578313253, 'recall': 0.9259259259259259, 'f1': 0.9146341463414634, 'number': 81} | {'precision': 0.9607843137254902, 'recall': 0.9423076923076923, 'f1': 0.9514563106796117, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9285714285714286, 'recall': 0.9430051813471503, 'f1': 0.9357326478149101, 'number': 193} | 0.9393 | 0.9513 | 0.9453 | 0.9951 |
| 0.0001 | 90.0 | 1800 | 0.0362 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.9146341463414634, 'recall': 0.9259259259259259, 'f1': 0.9202453987730062, 'number': 81} | {'precision': 0.9803921568627451, 'recall': 0.9615384615384616, 'f1': 0.970873786407767, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9384615384615385, 'recall': 0.9481865284974094, 'f1': 0.9432989690721649, 'number': 193} | 0.9516 | 0.9576 | 0.9546 | 0.9964 |
| 0.0001 | 100.0 | 2000 | 0.0378 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.9146341463414634, 'recall': 0.9259259259259259, 'f1': 0.9202453987730062, 'number': 81} | {'precision': 0.9615384615384616, 'recall': 0.9615384615384616, 'f1': 0.9615384615384616, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9336734693877551, 'recall': 0.9481865284974094, 'f1': 0.9408740359897172, 'number': 193} | 0.9476 | 0.9576 | 0.9526 | 0.9962 |
| 0.0001 | 110.0 | 2200 | 0.0379 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 0.9565217391304348, 'recall': 0.9565217391304348, 'f1': 0.9565217391304348, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.9146341463414634, 'recall': 0.9259259259259259, 'f1': 0.9202453987730062, 'number': 81} | {'precision': 0.9615384615384616, 'recall': 0.9615384615384616, 'f1': 0.9615384615384616, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9285714285714286, 'recall': 0.9430051813471503, 'f1': 0.9357326478149101, 'number': 193} | 0.9434 | 0.9534 | 0.9484 | 0.9959 |
| 0.0001 | 120.0 | 2400 | 0.0361 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.9146341463414634, 'recall': 0.9259259259259259, 'f1': 0.9202453987730062, 'number': 81} | {'precision': 0.9615384615384616, 'recall': 0.9615384615384616, 'f1': 0.9615384615384616, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9336734693877551, 'recall': 0.9481865284974094, 'f1': 0.9408740359897172, 'number': 193} | 0.9476 | 0.9576 | 0.9526 | 0.9962 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "SCUT-DLVCLab/lilt-roberta-en-base", "model-index": [{"name": "lilt-en-aadhaar-red", "results": []}]} | prashantloni/lilt-en-aadhaar-red | null | [
"transformers",
"tensorboard",
"safetensors",
"lilt",
"token-classification",
"generated_from_trainer",
"base_model:SCUT-DLVCLab/lilt-roberta-en-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:18:41+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #lilt #token-classification #generated_from_trainer #base_model-SCUT-DLVCLab/lilt-roberta-en-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
| lilt-en-aadhaar-red
===================
This model is a fine-tuned version of SCUT-DLVCLab/lilt-roberta-en-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0287
* Adhaar Number: {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39}
* Ame: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23}
* Ather Name: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2}
* Ather Name Back: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19}
* Ather Name Front Top: {'precision': 0.9166666666666666, 'recall': 1.0, 'f1': 0.9565217391304348, 'number': 11}
* Ddress Back: {'precision': 0.9512195121951219, 'recall': 0.9629629629629629, 'f1': 0.9570552147239264, 'number': 81}
* Ddress Front: {'precision': 0.9615384615384616, 'recall': 0.9615384615384616, 'f1': 0.9615384615384616, 'number': 52}
* Ender: {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21}
* Ob: {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21}
* Obile Number: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10}
* Ther: {'precision': 0.958974358974359, 'recall': 0.9689119170984456, 'f1': 0.9639175257731959, 'number': 193}
* Overall Precision: 0.9623
* Overall Recall: 0.9725
* Overall F1: 0.9673
* Overall Accuracy: 0.9973
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 2500
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 2500\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #lilt #token-classification #generated_from_trainer #base_model-SCUT-DLVCLab/lilt-roberta-en-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 2500\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | notbdq/distilgt2-turkish | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T13:19:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** lvchongen
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
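For completeness, a hedged loading sketch is given below; the kwargs mirror common Unsloth examples and may not match the exact settings this upload was trained with.

```python
# Hedged loading sketch -- assumes `pip install unsloth` and a CUDA GPU.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="lvchongen/demo_model",  # repo id from this card
    max_seq_length=2048,                # assumption; not stated on the card
    load_in_4bit=True,                  # matches the 4-bit base model above
)
FastLanguageModel.for_inference(model)  # enables faster generation
```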
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | lvchongen/demo_model | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:19:06+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: lvchongen
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: lvchongen\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: lvchongen\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | notbdq/distilgpt2-turkish | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:19:19+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Grayx/sad_llama_37 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T13:22:12+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
sentence-similarity | sentence-transformers |
# Randstad/LaBSe_GCP
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Randstad/LaBSe_GCP')
embeddings = model.encode(sentences)
print(embeddings)
```
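Since the card describes clustering and semantic search as target tasks, here is a minimal semantic-search sketch using `sentence_transformers.util`; the corpus and query below are illustrative placeholders:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('Randstad/LaBSe_GCP')
corpus = ["The cat sits outside", "A man is playing guitar"]
query = "Someone is making music"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus sentences by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit['corpus_id']], hit['score'])
```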
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Randstad/LaBSe_GCP)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2813 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 703,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "warmupcosine",
"steps_per_epoch": null,
"warmup_steps": 1406,
"weight_decay": 0.01
}
```
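These parameters correspond to the pre-v3 `SentenceTransformer.fit()` API. A hedged sketch of the equivalent training call, where `train_dataloader` (a DataLoader of labeled `InputExample` pairs) is a placeholder:

```python
from sentence_transformers import losses

# train_dataloader: DataLoader of InputExample pairs with similarity labels (placeholder)
train_loss = losses.CosineSimilarityLoss(model)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=5,
    scheduler='warmupcosine',
    warmup_steps=1406,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    evaluation_steps=703,
    max_grad_norm=1,
)
```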
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Randstad/LaBSe_GCP | null | [
"sentence-transformers",
"safetensors",
"LaBSe",
"feature-extraction",
"sentence-similarity",
"custom_code",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:23:20+00:00 | [] | [] | TAGS
#sentence-transformers #safetensors #LaBSe #feature-extraction #sentence-similarity #custom_code #endpoints_compatible #region-us
|
# Randstad/LaBSe_GCP
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 2813 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# Randstad/LaBSe_GCP\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 2813 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #safetensors #LaBSe #feature-extraction #sentence-similarity #custom_code #endpoints_compatible #region-us \n",
"# Randstad/LaBSe_GCP\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 2813 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_2024-04-24-13-17-50
This model is a fine-tuned version of [baichuan-inc/Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B) on the alpaca_gpt4_zh and the alpaca_zh datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2 | {"license": "other", "library_name": "peft", "tags": ["llama-factory", "lora", "generated_from_trainer"], "base_model": "baichuan-inc/Baichuan-7B", "model-index": [{"name": "train_2024-04-24-13-17-50", "results": []}]} | Sylvia2025/baichuan-7B-alpaca-gpt4-zh | null | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:baichuan-inc/Baichuan-7B",
"license:other",
"region:us"
] | null | 2024-04-24T13:24:24+00:00 | [] | [] | TAGS
#peft #safetensors #llama-factory #lora #generated_from_trainer #base_model-baichuan-inc/Baichuan-7B #license-other #region-us
|
# train_2024-04-24-13-17-50
This model is a fine-tuned version of baichuan-inc/Baichuan-7B on the alpaca_gpt4_zh and the alpaca_zh datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2 | [
"# train_2024-04-24-13-17-50\n\nThis model is a fine-tuned version of baichuan-inc/Baichuan-7B on the alpaca_gpt4_zh and the alpaca_zh datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- num_epochs: 3.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.37.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #llama-factory #lora #generated_from_trainer #base_model-baichuan-inc/Baichuan-7B #license-other #region-us \n",
"# train_2024-04-24-13-17-50\n\nThis model is a fine-tuned version of baichuan-inc/Baichuan-7B on the alpaca_gpt4_zh and the alpaca_zh datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- num_epochs: 3.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.37.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2"
] |
audio-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6635
- Accuracy: 0.0265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6410 | 0.0531 |
| No log | 1.8667 | 7 | 2.6430 | 0.0442 |
| 2.636 | 2.9333 | 11 | 2.6526 | 0.0531 |
| 2.636 | 4.0 | 15 | 2.6547 | 0.0177 |
| 2.636 | 4.8 | 18 | 2.6617 | 0.0354 |
| 2.6231 | 5.8667 | 22 | 2.6623 | 0.0354 |
| 2.6231 | 6.9333 | 26 | 2.6636 | 0.0265 |
| 2.61 | 8.0 | 30 | 2.6635 | 0.0265 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
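For inference, the fine-tuned checkpoint can be loaded through the `audio-classification` pipeline; a minimal sketch (the audio path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="ALIGHASEMI931/my_awesome_mind_model")
predictions = classifier("path/to/audio.wav")  # placeholder path to a 16 kHz mono recording
print(predictions)
```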
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["minds14"], "metrics": ["accuracy"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "my_awesome_mind_model", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "minds14", "type": "minds14", "config": "en-US", "split": "train", "args": "en-US"}, "metrics": [{"type": "accuracy", "value": 0.02654867256637168, "name": "Accuracy"}]}]}]} | ALIGHASEMI931/my_awesome_mind_model | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:25:21+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #dataset-minds14 #base_model-facebook/wav2vec2-base #license-apache-2.0 #model-index #endpoints_compatible #region-us
| my\_awesome\_mind\_model
========================
This model is a fine-tuned version of facebook/wav2vec2-base on the minds14 dataset.
It achieves the following results on the evaluation set:
* Loss: 2.6635
* Accuracy: 0.0265
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.41.0.dev0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #dataset-minds14 #base_model-facebook/wav2vec2-base #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper from the Hugging Face Deep RL course
# (it downloads and unpickles the saved Q-table dictionary).
model = load_from_hub(repo_id="hossniper/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"], is_slippery=False)
```
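A short greedy rollout with the loaded Q-table; the `"qtable"` key follows the Deep RL course convention and is an assumption:

```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```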
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | hossniper/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-24T13:28:06+00:00 | [] | [] | TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1.
## Usage
| [
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] | [
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "outputs", "results": []}]} | BenjaminTT/outputs | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | 2024-04-24T13:29:15+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B #license-other #region-us
|
# outputs
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# outputs\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- training_steps: 200\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.2.2+cu118\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B #license-other #region-us \n",
"# outputs\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- training_steps: 200\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.2.2+cu118\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4079
- Rouge1: 0.1935
- Rouge2: 0.0918
- Rougel: 0.1631
- Rougelsum: 0.1629
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.4772 | 0.1595 | 0.0642 | 0.1328 | 0.1326 | 19.0 |
| No log | 2.0 | 124 | 2.4328 | 0.1864 | 0.087 | 0.1582 | 0.1578 | 19.0 |
| No log | 3.0 | 186 | 2.4154 | 0.1933 | 0.0916 | 0.163 | 0.1627 | 19.0 |
| No log | 4.0 | 248 | 2.4079 | 0.1935 | 0.0918 | 0.1631 | 0.1629 | 19.0 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
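For inference, the checkpoint can be used through the `summarization` pipeline; a minimal sketch (the input article is a placeholder):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="umairaziz719/summarization_model")
article = "Long input text to summarize ..."  # placeholder document
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```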
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-small", "model-index": [{"name": "summarization_model", "results": []}]} | umairaziz719/summarization_model | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T13:29:36+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| summarization\_model
====================
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4079
* Rouge1: 0.1935
* Rouge2: 0.0918
* Rougel: 0.1631
* Rougelsum: 0.1629
* Gen Len: 19.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | null |
# apitchai/Mixtral-8x7B-Instruct-v0.1-Q4_0-GGUF
This model was converted to GGUF format from [`mistralai/Mixtral-8x7B-Instruct-v0.1`](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo apitchai/Mixtral-8x7B-Instruct-v0.1-Q4_0-GGUF --model mixtral-8x7b-instruct-v0.1.Q4_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo apitchai/Mixtral-8x7B-Instruct-v0.1-Q4_0-GGUF --model mixtral-8x7b-instruct-v0.1.Q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mixtral-8x7b-instruct-v0.1.Q4_0.gguf -n 128
```
| {"language": ["fr", "it", "de", "es", "en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "inference": {"parameters": {"temperature": 0.5}}, "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]} | apitchai/Mixtral-8x7B-Instruct-v0.1-Q4_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"fr",
"it",
"de",
"es",
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T13:30:55+00:00 | [] | [
"fr",
"it",
"de",
"es",
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #fr #it #de #es #en #license-apache-2.0 #region-us
|
# apitchai/Mixtral-8x7B-Instruct-v0.1-Q4_0-GGUF
This model was converted to GGUF format from 'mistralai/Mixtral-8x7B-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# apitchai/Mixtral-8x7B-Instruct-v0.1-Q4_0-GGUF\nThis model was converted to GGUF format from 'mistralai/Mixtral-8x7B-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #fr #it #de #es #en #license-apache-2.0 #region-us \n",
"# apitchai/Mixtral-8x7B-Instruct-v0.1-Q4_0-GGUF\nThis model was converted to GGUF format from 'mistralai/Mixtral-8x7B-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sanchit-gandhi/Mistral-7B-v0.1-6-layer
This model is a fine-tuned version of [sanchit-gandhi/Mistral-7B-v0.1-6-layer](https://huggingface.co/sanchit-gandhi/Mistral-7B-v0.1-6-layer) on the stingning/ultrachat dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0042
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 20000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 1.135 | 1.2361 | 5000 | 1.0484 |
| 0.9717 | 2.4722 | 10000 | 1.0058 |
| 0.8643 | 3.7083 | 15000 | 0.9966 |
| 0.8191 | 4.9444 | 20000 | 1.0042 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
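A hedged generation sketch with the distilled checkpoint (chat-formatted pipeline inputs are assumed to be supported by this Transformers version):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat")
messages = [{"role": "user", "content": "Explain knowledge distillation in one sentence."}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```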
| {"tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "alignment-handbook", "generated_from_trainer"], "datasets": ["stingning/ultrachat"], "base_model": "sanchit-gandhi/Mistral-7B-v0.1-6-layer", "model-index": [{"name": "sanchit-gandhi/Mistral-7B-v0.1-6-layer", "results": []}]} | sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat | null | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:stingning/ultrachat",
"base_model:sanchit-gandhi/Mistral-7B-v0.1-6-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T13:31:31+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-stingning/ultrachat #base_model-sanchit-gandhi/Mistral-7B-v0.1-6-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| sanchit-gandhi/Mistral-7B-v0.1-6-layer
======================================
This model is a fine-tuned version of sanchit-gandhi/Mistral-7B-v0.1-6-layer on the stingning/ultrachat dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0042
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 8
* total\_train\_batch\_size: 256
* total\_eval\_batch\_size: 256
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 20000
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.2+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 256\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 20000",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-stingning/ultrachat #base_model-sanchit-gandhi/Mistral-7B-v0.1-6-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 256\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 20000",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | JFernandoGRE/mixtral8x7binstruct_augmenteddemocracy_adapter | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:31:36+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the policy (filename assumed).
checkpoint = load_from_hub(repo_id="JBERN29/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
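To sanity-check the reported score, the loaded policy can be evaluated with SB3's helper:

```python
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.env_util import make_vec_env

eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```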
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "245.96 +/- 25.48", "name": "mean_reward", "verified": false}]}]}]} | JBERN29/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-24T13:34:03+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
null | null |
# luca10g/Suzume-llama-3-8B-multilingual-Q6K-GGUF
**At the time of converting this model, there was no Q6K version available.**
This model was converted to GGUF format from [`lightblue/suzume-llama-3-8B-multilingual`](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) for more details on the model. | {"license": "other", "tags": ["generated_from_trainer", "llama-cpp", "gguf-my-repo"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "license_name": "llama-3", "license_link": "https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/raw/main/LICENSE", "model-index": [{"name": "lightblue/suzume-llama-3-8B-multilingual", "results": []}]} | luca10g/Suzume-llama-3-8B-multilingual-Q6K-GGUF | null | [
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-04-24T13:34:06+00:00 | [] | [] | TAGS
#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us
|
# luca10g/Suzume-llama-3-8B-multilingual-Q6K-GGUF
At the time of converting this model, there was no Q6K version available.
This model was converted to GGUF format from 'lightblue/suzume-llama-3-8B-multilingual' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model. | [
"# luca10g/Suzume-llama-3-8B-multilingual-Q6K-GGUF\n\nAt the time of converting this model, there was no Q6K version available.\n\nThis model was converted to GGUF format from 'lightblue/suzume-llama-3-8B-multilingual' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model."
] | [
"TAGS\n#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us \n",
"# luca10g/Suzume-llama-3-8B-multilingual-Q6K-GGUF\n\nAt the time of converting this model, there was no Q6K version available.\n\nThis model was converted to GGUF format from 'lightblue/suzume-llama-3-8B-multilingual' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model."
] |
text-generation | transformers |
> [!TIP]
> This is the official GPTQ quantization, calibrated using the training data.
# LYNN - AI for Roleplay
<img src="./reallynn.png" alt="it's lynn!" width="340"/>
> [!TIP]
> This model is overfitted to the role-playing dataset; normal conversations may not work well.
# Soliloquy-L3
Soliloquy-L3 is a fast, highly capable roleplaying model designed for immersive, dynamic experiences. Trained on over 250 million tokens of roleplaying data, Soliloquy-L3 has a vast knowledge base, rich literary expression, and support for up to 24k context length. It outperforms existing ~13B models, delivering enhanced roleplaying capabilities.
## Model Info
| Context Length | Parameter | Prompt Template | isErp |
| --- | --- | --- | --- |
| 24k(24576) | 8B | Llama 3 Chat | Partly |
## Prompt Template
You can use the following jinja2 template, which is identical to the chat_template in [tokenizer_config](./tokenizer_config.json).
```
{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}
```
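In practice the template is applied through the tokenizer rather than written by hand; a minimal sketch, assuming the tokenizer in this repo ships the template above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openlynn/Llama-3-Soliloquy-8B-GPTQ")
messages = [{"role": "user", "content": "Hello, Lynn!"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```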
## License
This model is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
If you would like to use this model for commercial purposes, please use our proprietary API. (Currently available at OpenRouter)
For non-commercial use, please adhere to the terms of the CC BY-NC-SA 4.0 license. You are free to share and adapt the model for non-commercial purposes, provided you give appropriate credit, indicate if changes were made, and do not imply endorsement by the licensor.
For more information about the CC BY-NC-SA 4.0 license, please visit: https://creativecommons.org/licenses/by-nc-sa/4.0/
If you have any questions or would like to inquire about licensing, please contact us.
## Llama 3 Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
[https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
## Join our Discord
[**Join LYNN Discord**](https://discord.gg/xuZVqUyG4Y) | {"language": ["en"], "license": "cc-by-nc-sa-4.0"} | openlynn/Llama-3-Soliloquy-8B-GPTQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-24T13:34:12+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #conversational #en #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
>
> [!TIP]
> This is the official GPTQ quantization, calibrated using the training data.
>
>
>
LYNN - AI for Roleplay
======================

>
> [!TIP]
> This model is overfitted to the role-playing dataset; normal conversations may not work well.
>
>
>
Soliloquy-L3
============
Soliloquy-L3 is a fast, highly capable roleplaying model designed for immersive, dynamic experiences. Trained on over 250 million tokens of roleplaying data, Soliloquy-L3 has a vast knowledge base, rich literary expression, and support for up to 24k context length. It outperforms existing ~13B models, delivering enhanced roleplaying capabilities.
Model Info
----------
Prompt Template
---------------
You can use the following jinja2 template, which is identical to the chat\_template in tokenizer\_config.
License
-------
This model is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under META LLAMA 3 COMMUNITY LICENSE AGREEMENT
If you would like to use this model for commercial purposes, please use our proprietary API. (Currently available at OpenRouter)
For non-commercial use, please adhere to the terms of the CC BY-NC-SA 4.0 license. You are free to share and adapt the model for non-commercial purposes, provided you give appropriate credit, indicate if changes were made, and do not imply endorsement by the licensor.
For more information about the CC BY-NC-SA 4.0 license, please visit: URL
If you have any questions or would like to inquire about licensing, please contact us.
Llama 3 Intended Use
--------------------
Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
URL
Join our Discord
----------------
Join LYNN Discord
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #en #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/yzhuang/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
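As a sketch, one way to run one of the quantized files below is through the `llama-cpp-python` bindings; this runtime choice and the chosen quant file are assumptions, not something this card prescribes:

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table below has been downloaded locally.
llm = Llama(
    model_path="Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q4_K_M.gguf",
    n_ctx=8192,  # context window; reduce if memory is tight
)

out = llm("Briefly introduce yourself.", max_tokens=64)
print(out["choices"][0]["text"])
```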
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "yzhuang/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2", "quantized_by": "mradermacher"} | mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF | null | [
"transformers",
"gguf",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:generator",
"base_model:yzhuang/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:34:26+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #trl #sft #generated_from_trainer #en #dataset-generator #base_model-yzhuang/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2 #license-other #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #trl #sft #generated_from_trainer #en #dataset-generator #base_model-yzhuang/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2 #license-other #endpoints_compatible #region-us \n"
] |
text-generation | transformers | I dunno what I did. I kind of hecked together "Undi95/Llama-3-Unholy-8B-e4" and "dreamgen/WizardLM-2-7B".

I don't even know if it works. | {} | zuzuka17/LaZardy3_7.3B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T13:36:35+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| I dunno what I did. I kind of hecked together "Undi95/Llama-3-Unholy-8B-e4" and "dreamgen/WizardLM-2-7B".
!Fox1
I don't even know if it works. | [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-classification | transformers |
# Spanish Fake News Classifier
## Overview
This BERT-based text classifier was developed as a thesis project for the Computer Engineering degree at Universidad de Buenos Aires (UBA).
The model is designed to detect fake news in Spanish and was fine-tuned on the *dccuchile/bert-base-spanish-wwm-uncased* model using a specific set of hyperparameters.
It was trained on a dataset containing 125,000 Spanish news articles collected from various regions, both true and false.
## Team Members
- **[Azul Fuentes](https://github.com/azu26)**
- **[Dante Reinaudo](https://github.com/DanteReinaudo)**
- **[Lucía Pardo](https://github.com/luciaPardo)**
- **[Roberto Iskandarani](https://github.com/Robert-Iskandarani)**
## Model Details
* **Base Model**: dccuchile/bert-base-spanish-wwm-uncased
* **Hyperparameters**:
* **dropout_rate = 0.1**
* **num_classes = 2**
* **max_length = 128**
* **batch_size = 16**
* **num_epochs = 5**
* **learning_rate = 3e-5**
* **Dataset**: 125,000 Spanish news articles (True and False)
## Metrics
The model's performance was evaluated using the following metrics:
* **Accuracy = _83.17%_**
* **F1-Score = _81.94%_**
* **Precision = _85.62%_**
* **Recall = _81.10%_**
## Usage
### Installation
You can install the required dependencies using pip:
```bash
pip install transformers torch
```
### Loading the Model
```python
from transformers import BertForSequenceClassification, BertTokenizer
model = BertForSequenceClassification.from_pretrained("VerificadoProfesional/SaBERT-Spanish-Fake-News")
tokenizer = BertTokenizer.from_pretrained("VerificadoProfesional/SaBERT-Spanish-Fake-News")
```
### Predict Function
```python
import torch

def predict(model, tokenizer, text, threshold=0.5):
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    logits = outputs.logits
    probabilities = torch.softmax(logits, dim=1).squeeze().tolist()
    predicted_class = torch.argmax(logits, dim=1).item()
    # Only keep the "fake" label (class 1) when the model is confident enough.
    if probabilities[predicted_class] <= threshold and predicted_class == 1:
        predicted_class = 0
    return bool(predicted_class), probabilities
```
### Making Predictions
```python
text = "Your Spanish news text here"
predicted_label,probabilities = predict(model,tokenizer,text)
print(f"Text: {text}")
print(f"Predicted Class: {predicted_label}")
print(f"Probabilities: {probabilities}")
```
## License
Apache License 2.0
## Acknowledgments
Special thanks to DCC UChile for the base Spanish BERT model and to all contributors to the dataset used for training.
| {"language": ["es"], "license": "apache-2.0", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "widget": [{"text": "La tierra es Plana", "output": [{"label": "False", "score": 0.882}, {"label": "True", "score": 0.118}]}]} | VerificadoProfesional/SaBERT-Spanish-Fake-News | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:38:23+00:00 | [] | [
"es"
] | TAGS
#transformers #safetensors #bert #text-classification #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Spanish Fake News Classifier
## Overview
This BERT-based text classifier was developed as a thesis project for the Computer Engineering degree at Universidad de Buenos Aires (UBA).
The model is designed to detect fake news in Spanish and was fine-tuned on the *dccuchile/bert-base-spanish-wwm-uncased* model using a specific set of hyperparameters.
It was trained on a dataset containing 125,000 Spanish news articles collected from various regions, both true and false.
## Team Members
- Azul Fuentes
- Dante Reinaudo
- Lucía Pardo
- Roberto Iskandarani
## Model Details
* Base Model: dccuchile/bert-base-spanish-wwm-uncased
* Hyperparameters:
* dropout_rate = 0.1
* num_classes = 2
* max_length = 128
* batch_size = 16
* num_epochs = 5
* learning_rate = 3e-5
* Dataset: 125,000 Spanish news articles (True and False)
## Metrics
The model's performance was evaluated using the following metrics:
* Accuracy = _83.17%_
* F1-Score = _81.94%_
* Precision = _85.62%_
* Recall = _81.10%_
## Usage
### Installation
You can install the required dependencies using pip:
### Loading the Model
### Predict Function
### Making Predictions
## License
Apache License 2.0
## Acknowledgments
Special thanks to DCC UChile for the base Spanish BERT model and to all contributors to the dataset used for training.
| [
"# Spanish Fake News Classifier",
"## Overview\nThis BERT-based text classifier was developed as a thesis project for the Computer Engineering degree at Universidad de Buenos Aires (UBA). \nThe model is designed to detect fake news in Spanish and was fine-tuned on the *dccuchile/bert-base-spanish-wwm-uncased* model using a specific set of hyperparameters. \nIt was trained on a dataset containing 125,000 Spanish news articles collected from various regions, both true and false.",
"## Team Members\n- Azul Fuentes\n- Dante Reinaudo \n- Lucía Pardo\n- Roberto Iskandarani",
"## Model Details\n* Base Mode: dccuchile/bert-base-spanish-wwm-uncased\n* Hyperparameters: \n * dropout_rate = 0.1\n * num_classes = 2\n * max_length = 128\n * batch_size = 16\n * num_epochs = 5\n * learning_rate = 3e-5\n \n* Dataset: 125,000 Spanish news articles (True and False)",
"## Metrics\nThe model's performance was evaluated using the following metrics:\n\n * Accuracy = _83.17%_\n * F1-Score = _81.94%_\n * Precision = _85.62%_\n * Recall = _81.10%_",
"## Usage",
"### Installation\nYou can install the required dependencies using pip:",
"### Loading the Model",
"### Predict Function",
"### Making Predictions",
"## License\nApache License 2.0",
"## Acknowledgments\nSpecial thanks to DCC UChile for the base Spanish BERT model and to all contributors to the dataset used for training."
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Spanish Fake News Classifier",
"## Overview\nThis BERT-based text classifier was developed as a thesis project for the Computer Engineering degree at Universidad de Buenos Aires (UBA). \nThe model is designed to detect fake news in Spanish and was fine-tuned on the *dccuchile/bert-base-spanish-wwm-uncased* model using a specific set of hyperparameters. \nIt was trained on a dataset containing 125,000 Spanish news articles collected from various regions, both true and false.",
"## Team Members\n- Azul Fuentes\n- Dante Reinaudo \n- Lucía Pardo\n- Roberto Iskandarani",
"## Model Details\n* Base Mode: dccuchile/bert-base-spanish-wwm-uncased\n* Hyperparameters: \n * dropout_rate = 0.1\n * num_classes = 2\n * max_length = 128\n * batch_size = 16\n * num_epochs = 5\n * learning_rate = 3e-5\n \n* Dataset: 125,000 Spanish news articles (True and False)",
"## Metrics\nThe model's performance was evaluated using the following metrics:\n\n * Accuracy = _83.17%_\n * F1-Score = _81.94%_\n * Precision = _85.62%_\n * Recall = _81.10%_",
"## Usage",
"### Installation\nYou can install the required dependencies using pip:",
"### Loading the Model",
"### Predict Function",
"### Making Predictions",
"## License\nApache License 2.0",
"## Acknowledgments\nSpecial thanks to DCC UChile for the base Spanish BERT model and to all contributors to the dataset used for training."
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut_synDB_da
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0747
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
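For reference, the settings above map roughly onto Hugging Face `Seq2SeqTrainingArguments` as sketched below; the actual training script is not published, so treat this as a reconstruction, not the exact configuration used:

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the listed hyperparameters; output_dir is a placeholder.
args = Seq2SeqTrainingArguments(
    output_dir="donut_synDB_da",
    learning_rate=6e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size of 16
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # native AMP mixed precision
)
```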
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3963 | 0.82 | 50 | 0.1464 |
| 0.1987 | 1.22 | 75 | 0.1222 |
| 0.1286 | 1.63 | 100 | 0.0964 |
| 0.1132 | 2.04 | 125 | 0.1117 |
| 0.0803 | 2.45 | 150 | 0.0801 |
| 0.068 | 2.86 | 175 | 0.0804 |
| 0.0567 | 3.27 | 200 | 0.0521 |
| 0.0495 | 3.67 | 225 | 0.0727 |
| 0.0436 | 4.08 | 250 | 0.0681 |
| 0.0425 | 4.49 | 275 | 0.0754 |
| 0.0361 | 4.9 | 300 | 0.0747 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "donut_synDB_da", "results": []}]} | Donut01/donut_synDB_da | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:42:27+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
| donut\_synDB\_da
================
This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0747
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 6e-05
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-med-LoRA_nosie_128_256_45k
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1914
- Wer: 8.6601
## Model description
More information needed
## Intended uses & limitations
More information needed
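The repository name suggests a LoRA adapter over whisper-medium; if the uploaded weights are a PEFT adapter (an assumption, since the card does not say), loading could look like this sketch:

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
# Assumes this repo hosts PEFT adapter weights for the base model.
model = PeftModel.from_pretrained(base, "adityarra07/whisper-med-LoRA_nosie_128_256_45k")
model = model.merge_and_unload()  # fold the LoRA weights into the base model

processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
```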
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.5111 | 1.0 | 2863 | 0.2845 | 11.9909 |
| 0.2265 | 2.0 | 5726 | 0.2335 | 10.3921 |
| 0.1772 | 3.0 | 8589 | 0.2106 | 9.4024 |
| 0.1495 | 4.0 | 11452 | 0.1959 | 9.0027 |
| 0.1331 | 5.0 | 14315 | 0.1914 | 8.6601 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-medium", "model-index": [{"name": "whisper-med-LoRA_nosie_128_256_45k", "results": []}]} | adityarra07/whisper-med-LoRA_nosie_128_256_45k | null | [
"generated_from_trainer",
"base_model:openai/whisper-medium",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T13:42:35+00:00 | [] | [] | TAGS
#generated_from_trainer #base_model-openai/whisper-medium #license-apache-2.0 #region-us
| whisper-med-LoRA\_nosie\_128\_256\_45k
======================================
This model is a fine-tuned version of openai/whisper-medium on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1914
* Wer: 8.6601
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.33.1
* Pytorch 2.0.1+cu117
* Datasets 2.14.5
* Tokenizers 0.13.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.33.1\n* Pytorch 2.0.1+cu117\n* Datasets 2.14.5\n* Tokenizers 0.13.3"
] | [
"TAGS\n#generated_from_trainer #base_model-openai/whisper-medium #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.33.1\n* Pytorch 2.0.1+cu117\n* Datasets 2.14.5\n* Tokenizers 0.13.3"
] |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ThatOneSkyler/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]} | ThatOneSkyler/ppo-Huggy | null | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | null | 2024-04-24T13:43:14+00:00 | [] | [] | TAGS
#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
|
# ppo Agent playing Huggy
This is a trained model of a ppo agent playing Huggy
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: ThatOneSkyler/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
| [
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: ThatOneSkyler/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] | [
"TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n",
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: ThatOneSkyler/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
null | transformers | # Jina V2 Embed Model
Re-upload of the Jina embedding model that removes the dependency on ONNX and Optimum by recreating it with a custom class in Takeoff.
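Since the repo ships custom model code, loading presumably requires `trust_remote_code`; a minimal sketch (the surrounding encode/pooling interface is an assumption):

```python
from transformers import AutoModel

# trust_remote_code pulls in the custom class mentioned above.
model = AutoModel.from_pretrained("TitanML/jina-v2-code-embed", trust_remote_code=True)
```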
| {} | TitanML/jina-v2-code-embed | null | [
"transformers",
"safetensors",
"bert",
"custom_code",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:43:34+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #custom_code #endpoints_compatible #region-us
| # Jina V2 Embed Model
Re-upload of the Jina embedding model that removes the dependency on ONNX and Optimum by recreating it with a custom class in Takeoff.
| [
"# Jina V2 Embed Model\n\nReupload of the jina embedding model that removes the dependence on onnx and optimum, by recreating it with a custom class in Takeoff."
] | [
"TAGS\n#transformers #safetensors #bert #custom_code #endpoints_compatible #region-us \n",
"# Jina V2 Embed Model\n\nReupload of the jina embedding model that removes the dependence on onnx and optimum, by recreating it with a custom class in Takeoff."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LLAMA3-8BI-APPS
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1490
## Model description
More information needed
## Intended uses & limitations
More information needed
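Given the PEFT library tag and the base model above, one plausible way to load the fine-tune is sketched below; the adapter layout is an assumption:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumes the repo hosts PEFT adapter weights on top of Meta-Llama-3-8B-Instruct.
model = AutoPeftModelForCausalLM.from_pretrained("AdnanRiaz107/CodeLLAMA3-8BI-APPS")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
```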
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9027 | 0.1 | 100 | 0.9320 |
| 0.8632 | 0.2 | 200 | 0.9143 |
| 0.8572 | 0.3 | 300 | 1.0150 |
| 0.937 | 0.4 | 400 | 1.0545 |
| 1.0336 | 0.5 | 500 | 1.1029 |
| 1.0056 | 0.6 | 600 | 1.1267 |
| 1.0125 | 0.7 | 700 | 1.1307 |
| 1.028 | 0.8 | 800 | 1.1398 |
| 1.0692 | 0.9 | 900 | 1.1482 |
| 1.0361 | 1.0 | 1000 | 1.1490 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "other", "library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "LLAMA3-8BI-APPS", "results": []}]} | AdnanRiaz107/CodeLLAMA3-8BI-APPS | null | [
"peft",
"safetensors",
"llama",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-04-24T13:43:38+00:00 | [] | [] | TAGS
#peft #safetensors #llama #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us
| LLAMA3-8BI-APPS
===============
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1490
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-06
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #llama #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
feature-extraction | sentence-transformers | <!-- TODO: add evaluation results here -->
<br><br>
<p align="center">
<img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>
## Quick Start
The easiest way to start using `jina-embeddings-v2-base-en` is to use Jina AI's [Embedding API](https://jina.ai/embeddings/).
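A minimal sketch of such an API call is shown below; the request and response field names follow the OpenAI-style schema the API documents, but treat them as assumptions and double-check against the current docs:

```python
import requests

# Field names are assumptions; see https://jina.ai/embeddings/ for the
# authoritative reference.
resp = requests.post(
    "https://api.jina.ai/v1/embeddings",
    headers={"Authorization": "Bearer <your Jina AI API key>"},
    json={
        "model": "jina-embeddings-v2-base-en",
        "input": ["How is the weather today?"],
    },
)
print(resp.json()["data"][0]["embedding"][:8])  # first few dimensions
```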
## Intended Usage & Model Info
`jina-embeddings-v2-base-en` is an English, monolingual **embedding model** supporting **8192 sequence length**.
It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequence length.
The backbone `jina-bert-v2-base-en` is pretrained on the C4 dataset.
The model is further trained on Jina AI's collection of more than 400 million sentence pairs and hard negatives.
These pairs were obtained from various domains and were carefully selected through a thorough cleaning process.
The embedding model was trained using 512 sequence length, but extrapolates to 8k sequence length (or even longer) thanks to ALiBi.
This makes our model useful for a range of use cases, especially when processing long documents is needed, including long document retrieval, semantic textual similarity, text reranking, recommendation, RAG and LLM-based generative search, etc.
With a standard size of 137 million parameters, the model enables fast inference while delivering better performance than our small model. It is recommended to use a single GPU for inference.
Additionally, we provide the following embedding models:
- [`jina-embeddings-v2-small-en`](https://huggingface.co/jinaai/jina-embeddings-v2-small-en): 33 million parameters.
- [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters **(you are here)**.
- [`jina-embeddings-v2-base-zh`](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh): Chinese-English Bilingual embeddings.
- [`jina-embeddings-v2-base-de`](https://huggingface.co/jinaai/jina-embeddings-v2-base-de): German-English Bilingual embeddings.
- [`jina-embeddings-v2-base-es`](https://huggingface.co/jinaai/jina-embeddings-v2-base-es): Spanish-English Bilingual embeddings.
## Data & Parameters
Jina Embeddings V2 [technical report](https://arxiv.org/abs/2310.19923)
## Usage
**<details><summary>Please apply mean pooling when integrating the model.</summary>**
<p>
### Why mean pooling?
`mean pooling` takes all token embeddings from the model output and averages them at the sentence/paragraph level.
It has proven to be the most effective way to produce high-quality sentence embeddings.
We offer an `encode` function to deal with this.
However, if you would like to do it without using the default `encode` function:
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ['How is the weather today?', 'What is the current weather like today?']

tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v2-small-en')
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-small-en', trust_remote_code=True)

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)

embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
```
</p>
</details>
You can use Jina Embedding models directly from the transformers package.
First, you need to make sure that you are logged into huggingface. You can either use the huggingface-cli tool (after installing the `transformers` package) and pass your [huggingface access token](https://huggingface.co/docs/hub/security-tokens):
```bash
huggingface-cli login
```
Alternatively, you can provide the access token as an environment variable in the shell:
```bash
export HF_TOKEN="<your token here>"
```
or in Python:
```python
import os
os.environ['HF_TOKEN'] = "<your token here>"
```
Then, you can load and use the model via the `AutoModel` class:
```python
!pip install transformers
from transformers import AutoModel
from numpy.linalg import norm
cos_sim = lambda a,b: (a @ b.T) / (norm(a)*norm(b))
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True) # trust_remote_code is needed to use the encode method
embeddings = model.encode(['How is the weather today?', 'What is the current weather like today?'])
print(cos_sim(embeddings[0], embeddings[1]))
```
If you only want to handle shorter sequences, such as 2k, pass the `max_length` parameter to the `encode` function:
```python
embeddings = model.encode(
    ['Very long ... document'],
    max_length=2048
)
```
As of its latest release (v2.3.0), sentence-transformers also supports Jina embeddings (please make sure that you are logged into huggingface as well):
```python
!pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
    "jinaai/jina-embeddings-v2-base-en", # switch to en/zh for English or Chinese
    trust_remote_code=True
)

# control your input sequence length up to 8192
model.max_seq_length = 1024

embeddings = model.encode([
    'How is the weather today?',
    'What is the current weather like today?'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
## Alternatives to Using the Transformers (or SentenceTransformers) Package
1. _Managed SaaS_: Get started with a free key on Jina AI's [Embedding API](https://jina.ai/embeddings/).
2. _Private and high-performance deployment_: Get started by picking from our suite of models and deploy them on [AWS Sagemaker](https://aws.amazon.com/marketplace/seller-profile?id=seller-stch2ludm6vgy).
## Use Jina Embeddings for RAG
According to the latest blog post from [LlamaIndex](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83),
> In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out.
<img src="https://miro.medium.com/v2/resize:fit:4800/format:webp/1*ZP2RVejCZovF3FDCg-Bx3A.png" width="780px">
## Plans
1. Bilingual embedding models supporting more European & Asian languages, including Spanish, French, Italian and Japanese.
2. Multimodal embedding models enabling multimodal RAG applications.
3. High-performance rerankers.
## Troubleshooting
**Loading of Model Code failed**
If you forgot to pass the `trust_remote_code=True` flag when calling `AutoModel.from_pretrained` or initializing the model via the `SentenceTransformer` class, you will receive an error that the model weights could not be initialized.
This is caused by transformers falling back to creating a default BERT model, instead of a jina-embedding model:
```bash
Some weights of the model checkpoint at jinaai/jina-embeddings-v2-base-en were not used when initializing BertModel: ['encoder.layer.2.mlp.layernorm.weight', 'encoder.layer.3.mlp.layernorm.weight', 'encoder.layer.10.mlp.wo.bias', 'encoder.layer.5.mlp.wo.bias', 'encoder.layer.2.mlp.layernorm.bias', 'encoder.layer.1.mlp.gated_layers.weight', 'encoder.layer.5.mlp.gated_layers.weight', 'encoder.layer.8.mlp.layernorm.bias', ...
```
**User is not logged into Huggingface**
The model is only available under [gated access](https://huggingface.co/docs/hub/models-gated).
This means you need to be logged into huggingface to load it.
If you receive the following error, you need to provide an access token, either by using the huggingface-cli or providing the token via an environment variable as described above:
```bash
OSError: jinaai/jina-embeddings-v2-base-en is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
## Citation
If you find Jina Embeddings useful in your research, please cite the following paper:
```
@misc{günther2023jina,
      title={Jina Embeddings 2: 8192-Token General-Purpose Text Embeddings for Long Documents},
      author={Michael Günther and Jackmin Ong and Isabelle Mohr and Alaeddine Abdessalem and Tanguy Abel and Mohammad Kalim Akram and Susana Guzman and Georgios Mastrapas and Saba Sturua and Bo Wang and Maximilian Werk and Nan Wang and Han Xiao},
      year={2023},
      eprint={2310.19923},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
``` | {"language": "en", "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"], "datasets": ["allenai/c4"], "inference": false, "model-index": [{"name": "jina-embedding-b-en-v2", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 74.73134328358209}, {"type": "ap", "value": 37.765427081831035}, {"type": "f1", "value": 68.79367444339518}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 88.544275}, {"type": "ap", "value": 84.61328675662887}, {"type": "f1", "value": 88.51879035862375}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 45.263999999999996}, {"type": "f1", "value": 43.778759656699435}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 21.693}, {"type": "map_at_10", "value": 35.487}, {"type": "map_at_100", "value": 36.862}, {"type": "map_at_1000", "value": 36.872}, {"type": "map_at_3", "value": 30.049999999999997}, {"type": "map_at_5", "value": 32.966}, {"type": "mrr_at_1", "value": 21.977}, {"type": "mrr_at_10", "value": 35.565999999999995}, {"type": "mrr_at_100", "value": 36.948}, {"type": "mrr_at_1000", "value": 36.958}, {"type": "mrr_at_3", "value": 30.121}, {"type": "mrr_at_5", "value": 33.051}, {"type": "ndcg_at_1", "value": 21.693}, {"type": "ndcg_at_10", "value": 44.181}, {"type": "ndcg_at_100", "value": 49.982}, {"type": "ndcg_at_1000", "value": 50.233000000000004}, {"type": "ndcg_at_3", "value": 32.830999999999996}, {"type": "ndcg_at_5", "value": 38.080000000000005}, {"type": "precision_at_1", "value": 21.693}, {"type": "precision_at_10", "value": 7.248}, {"type": "precision_at_100", "value": 0.9769999999999999}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 13.632}, {"type": "precision_at_5", "value": 10.725}, {"type": "recall_at_1", "value": 21.693}, {"type": "recall_at_10", "value": 72.475}, {"type": "recall_at_100", "value": 97.653}, {"type": "recall_at_1000", "value": 99.57300000000001}, {"type": "recall_at_3", "value": 40.896}, {"type": "recall_at_5", "value": 53.627}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 45.39242428696777}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 36.675626784714}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", 
"revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 62.247725694904034}, {"type": "mrr", "value": 74.91359978894604}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.68003802970496}, {"type": "cos_sim_spearman", "value": 81.23438110096286}, {"type": "euclidean_pearson", "value": 81.87462986142582}, {"type": "euclidean_spearman", "value": 81.23438110096286}, {"type": "manhattan_pearson", "value": 81.61162566600755}, {"type": "manhattan_spearman", "value": 81.11329400456184}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 84.01298701298701}, {"type": "f1", "value": 83.31690714969382}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 37.050108150972086}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 30.15731442819715}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 31.391999999999996}, {"type": "map_at_10", "value": 42.597}, {"type": "map_at_100", "value": 44.07}, {"type": "map_at_1000", "value": 44.198}, {"type": "map_at_3", "value": 38.957}, {"type": "map_at_5", "value": 40.961}, {"type": "mrr_at_1", "value": 37.196}, {"type": "mrr_at_10", "value": 48.152}, {"type": "mrr_at_100", "value": 48.928}, {"type": "mrr_at_1000", "value": 48.964999999999996}, {"type": "mrr_at_3", "value": 45.446}, {"type": "mrr_at_5", "value": 47.205999999999996}, {"type": "ndcg_at_1", "value": 37.196}, {"type": "ndcg_at_10", "value": 49.089}, {"type": "ndcg_at_100", "value": 54.471000000000004}, {"type": "ndcg_at_1000", "value": 56.385}, {"type": "ndcg_at_3", "value": 43.699}, {"type": "ndcg_at_5", "value": 46.22}, {"type": "precision_at_1", "value": 37.196}, {"type": "precision_at_10", "value": 9.313}, {"type": "precision_at_100", "value": 1.478}, {"type": "precision_at_1000", "value": 0.198}, {"type": "precision_at_3", "value": 20.839}, {"type": "precision_at_5", "value": 14.936}, {"type": "recall_at_1", "value": 31.391999999999996}, {"type": "recall_at_10", "value": 61.876}, {"type": "recall_at_100", "value": 84.214}, {"type": "recall_at_1000", "value": 95.985}, {"type": "recall_at_3", "value": 46.6}, {"type": "recall_at_5", "value": 53.588}, {"type": "map_at_1", "value": 29.083}, {"type": "map_at_10", "value": 38.812999999999995}, {"type": "map_at_100", "value": 40.053}, {"type": "map_at_1000", "value": 40.188}, {"type": "map_at_3", "value": 36.111}, {"type": "map_at_5", "value": 37.519000000000005}, {"type": "mrr_at_1", "value": 36.497}, {"type": "mrr_at_10", "value": 44.85}, {"type": "mrr_at_100", "value": 45.546}, {"type": "mrr_at_1000", "value": 45.593}, {"type": "mrr_at_3", "value": 
42.686}, {"type": "mrr_at_5", "value": 43.909}, {"type": "ndcg_at_1", "value": 36.497}, {"type": "ndcg_at_10", "value": 44.443}, {"type": "ndcg_at_100", "value": 48.979}, {"type": "ndcg_at_1000", "value": 51.154999999999994}, {"type": "ndcg_at_3", "value": 40.660000000000004}, {"type": "ndcg_at_5", "value": 42.193000000000005}, {"type": "precision_at_1", "value": 36.497}, {"type": "precision_at_10", "value": 8.433}, {"type": "precision_at_100", "value": 1.369}, {"type": "precision_at_1000", "value": 0.185}, {"type": "precision_at_3", "value": 19.894000000000002}, {"type": "precision_at_5", "value": 13.873}, {"type": "recall_at_1", "value": 29.083}, {"type": "recall_at_10", "value": 54.313}, {"type": "recall_at_100", "value": 73.792}, {"type": "recall_at_1000", "value": 87.629}, {"type": "recall_at_3", "value": 42.257}, {"type": "recall_at_5", "value": 47.066}, {"type": "map_at_1", "value": 38.556000000000004}, {"type": "map_at_10", "value": 50.698}, {"type": "map_at_100", "value": 51.705}, {"type": "map_at_1000", "value": 51.768}, {"type": "map_at_3", "value": 47.848}, {"type": "map_at_5", "value": 49.358000000000004}, {"type": "mrr_at_1", "value": 43.95}, {"type": "mrr_at_10", "value": 54.191}, {"type": "mrr_at_100", "value": 54.852999999999994}, {"type": "mrr_at_1000", "value": 54.885}, {"type": "mrr_at_3", "value": 51.954}, {"type": "mrr_at_5", "value": 53.13}, {"type": "ndcg_at_1", "value": 43.95}, {"type": "ndcg_at_10", "value": 56.516}, {"type": "ndcg_at_100", "value": 60.477000000000004}, {"type": "ndcg_at_1000", "value": 61.746}, {"type": "ndcg_at_3", "value": 51.601}, {"type": "ndcg_at_5", "value": 53.795}, {"type": "precision_at_1", "value": 43.95}, {"type": "precision_at_10", "value": 9.009}, {"type": "precision_at_100", "value": 1.189}, {"type": "precision_at_1000", "value": 0.135}, {"type": "precision_at_3", "value": 22.989}, {"type": "precision_at_5", "value": 15.473}, {"type": "recall_at_1", "value": 38.556000000000004}, {"type": "recall_at_10", "value": 70.159}, {"type": "recall_at_100", "value": 87.132}, {"type": "recall_at_1000", "value": 96.16}, {"type": "recall_at_3", "value": 56.906}, {"type": "recall_at_5", "value": 62.332}, {"type": "map_at_1", "value": 24.238}, {"type": "map_at_10", "value": 32.5}, {"type": "map_at_100", "value": 33.637}, {"type": "map_at_1000", "value": 33.719}, {"type": "map_at_3", "value": 30.026999999999997}, {"type": "map_at_5", "value": 31.555}, {"type": "mrr_at_1", "value": 26.328000000000003}, {"type": "mrr_at_10", "value": 34.44}, {"type": "mrr_at_100", "value": 35.455999999999996}, {"type": "mrr_at_1000", "value": 35.521}, {"type": "mrr_at_3", "value": 32.034}, {"type": "mrr_at_5", "value": 33.565}, {"type": "ndcg_at_1", "value": 26.328000000000003}, {"type": "ndcg_at_10", "value": 37.202}, {"type": "ndcg_at_100", "value": 42.728}, {"type": "ndcg_at_1000", "value": 44.792}, {"type": "ndcg_at_3", "value": 32.368}, {"type": "ndcg_at_5", "value": 35.008}, {"type": "precision_at_1", "value": 26.328000000000003}, {"type": "precision_at_10", "value": 5.7059999999999995}, {"type": "precision_at_100", "value": 0.8880000000000001}, {"type": "precision_at_1000", "value": 0.11100000000000002}, {"type": "precision_at_3", "value": 13.672}, {"type": "precision_at_5", "value": 9.74}, {"type": "recall_at_1", "value": 24.238}, {"type": "recall_at_10", "value": 49.829}, {"type": "recall_at_100", "value": 75.21}, {"type": "recall_at_1000", "value": 90.521}, {"type": "recall_at_3", "value": 36.867}, {"type": "recall_at_5", "value": 43.241}, {"type": 
"map_at_1", "value": 15.378}, {"type": "map_at_10", "value": 22.817999999999998}, {"type": "map_at_100", "value": 23.977999999999998}, {"type": "map_at_1000", "value": 24.108}, {"type": "map_at_3", "value": 20.719}, {"type": "map_at_5", "value": 21.889}, {"type": "mrr_at_1", "value": 19.03}, {"type": "mrr_at_10", "value": 27.022000000000002}, {"type": "mrr_at_100", "value": 28.011999999999997}, {"type": "mrr_at_1000", "value": 28.096}, {"type": "mrr_at_3", "value": 24.855}, {"type": "mrr_at_5", "value": 26.029999999999998}, {"type": "ndcg_at_1", "value": 19.03}, {"type": "ndcg_at_10", "value": 27.526}, {"type": "ndcg_at_100", "value": 33.040000000000006}, {"type": "ndcg_at_1000", "value": 36.187000000000005}, {"type": "ndcg_at_3", "value": 23.497}, {"type": "ndcg_at_5", "value": 25.334}, {"type": "precision_at_1", "value": 19.03}, {"type": "precision_at_10", "value": 4.963}, {"type": "precision_at_100", "value": 0.893}, {"type": "precision_at_1000", "value": 0.13}, {"type": "precision_at_3", "value": 11.360000000000001}, {"type": "precision_at_5", "value": 8.134}, {"type": "recall_at_1", "value": 15.378}, {"type": "recall_at_10", "value": 38.061}, {"type": "recall_at_100", "value": 61.754}, {"type": "recall_at_1000", "value": 84.259}, {"type": "recall_at_3", "value": 26.788}, {"type": "recall_at_5", "value": 31.326999999999998}, {"type": "map_at_1", "value": 27.511999999999997}, {"type": "map_at_10", "value": 37.429}, {"type": "map_at_100", "value": 38.818000000000005}, {"type": "map_at_1000", "value": 38.924}, {"type": "map_at_3", "value": 34.625}, {"type": "map_at_5", "value": 36.064}, {"type": "mrr_at_1", "value": 33.300999999999995}, {"type": "mrr_at_10", "value": 43.036}, {"type": "mrr_at_100", "value": 43.894}, {"type": "mrr_at_1000", "value": 43.936}, {"type": "mrr_at_3", "value": 40.825}, {"type": "mrr_at_5", "value": 42.028}, {"type": "ndcg_at_1", "value": 33.300999999999995}, {"type": "ndcg_at_10", "value": 43.229}, {"type": "ndcg_at_100", "value": 48.992000000000004}, {"type": "ndcg_at_1000", "value": 51.02100000000001}, {"type": "ndcg_at_3", "value": 38.794000000000004}, {"type": "ndcg_at_5", "value": 40.65}, {"type": "precision_at_1", "value": 33.300999999999995}, {"type": "precision_at_10", "value": 7.777000000000001}, {"type": "precision_at_100", "value": 1.269}, {"type": "precision_at_1000", "value": 0.163}, {"type": "precision_at_3", "value": 18.351}, {"type": "precision_at_5", "value": 12.762}, {"type": "recall_at_1", "value": 27.511999999999997}, {"type": "recall_at_10", "value": 54.788000000000004}, {"type": "recall_at_100", "value": 79.105}, {"type": "recall_at_1000", "value": 92.49199999999999}, {"type": "recall_at_3", "value": 41.924}, {"type": "recall_at_5", "value": 47.026}, {"type": "map_at_1", "value": 24.117}, {"type": "map_at_10", "value": 33.32}, {"type": "map_at_100", "value": 34.677}, {"type": "map_at_1000", "value": 34.78}, {"type": "map_at_3", "value": 30.233999999999998}, {"type": "map_at_5", "value": 31.668000000000003}, {"type": "mrr_at_1", "value": 29.566}, {"type": "mrr_at_10", "value": 38.244}, {"type": "mrr_at_100", "value": 39.245000000000005}, {"type": "mrr_at_1000", "value": 39.296}, {"type": "mrr_at_3", "value": 35.864000000000004}, {"type": "mrr_at_5", "value": 36.919999999999995}, {"type": "ndcg_at_1", "value": 29.566}, {"type": "ndcg_at_10", "value": 39.127}, {"type": "ndcg_at_100", "value": 44.989000000000004}, {"type": "ndcg_at_1000", "value": 47.189}, {"type": "ndcg_at_3", "value": 34.039}, {"type": "ndcg_at_5", "value": 35.744}, {"type": 
"precision_at_1", "value": 29.566}, {"type": "precision_at_10", "value": 7.385999999999999}, {"type": "precision_at_100", "value": 1.204}, {"type": "precision_at_1000", "value": 0.158}, {"type": "precision_at_3", "value": 16.286}, {"type": "precision_at_5", "value": 11.484}, {"type": "recall_at_1", "value": 24.117}, {"type": "recall_at_10", "value": 51.559999999999995}, {"type": "recall_at_100", "value": 77.104}, {"type": "recall_at_1000", "value": 91.79899999999999}, {"type": "recall_at_3", "value": 36.82}, {"type": "recall_at_5", "value": 41.453}, {"type": "map_at_1", "value": 25.17625}, {"type": "map_at_10", "value": 34.063916666666664}, {"type": "map_at_100", "value": 35.255500000000005}, {"type": "map_at_1000", "value": 35.37275}, {"type": "map_at_3", "value": 31.351666666666667}, {"type": "map_at_5", "value": 32.80608333333333}, {"type": "mrr_at_1", "value": 29.59783333333333}, {"type": "mrr_at_10", "value": 38.0925}, {"type": "mrr_at_100", "value": 38.957249999999995}, {"type": "mrr_at_1000", "value": 39.01608333333333}, {"type": "mrr_at_3", "value": 35.77625}, {"type": "mrr_at_5", "value": 37.04991666666667}, {"type": "ndcg_at_1", "value": 29.59783333333333}, {"type": "ndcg_at_10", "value": 39.343666666666664}, {"type": "ndcg_at_100", "value": 44.488249999999994}, {"type": "ndcg_at_1000", "value": 46.83358333333334}, {"type": "ndcg_at_3", "value": 34.69708333333333}, {"type": "ndcg_at_5", "value": 36.75075}, {"type": "precision_at_1", "value": 29.59783333333333}, {"type": "precision_at_10", "value": 6.884083333333332}, {"type": "precision_at_100", "value": 1.114}, {"type": "precision_at_1000", "value": 0.15108333333333332}, {"type": "precision_at_3", "value": 15.965250000000003}, {"type": "precision_at_5", "value": 11.246500000000001}, {"type": "recall_at_1", "value": 25.17625}, {"type": "recall_at_10", "value": 51.015999999999984}, {"type": "recall_at_100", "value": 73.60174999999998}, {"type": "recall_at_1000", "value": 89.849}, {"type": "recall_at_3", "value": 37.88399999999999}, {"type": "recall_at_5", "value": 43.24541666666666}, {"type": "map_at_1", "value": 24.537}, {"type": "map_at_10", "value": 31.081999999999997}, {"type": "map_at_100", "value": 32.042}, {"type": "map_at_1000", "value": 32.141}, {"type": "map_at_3", "value": 29.137}, {"type": "map_at_5", "value": 30.079}, {"type": "mrr_at_1", "value": 27.454}, {"type": "mrr_at_10", "value": 33.694}, {"type": "mrr_at_100", "value": 34.579}, {"type": "mrr_at_1000", "value": 34.649}, {"type": "mrr_at_3", "value": 32.004}, {"type": "mrr_at_5", "value": 32.794000000000004}, {"type": "ndcg_at_1", "value": 27.454}, {"type": "ndcg_at_10", "value": 34.915}, {"type": "ndcg_at_100", "value": 39.641}, {"type": "ndcg_at_1000", "value": 42.105}, {"type": "ndcg_at_3", "value": 31.276}, {"type": "ndcg_at_5", "value": 32.65}, {"type": "precision_at_1", "value": 27.454}, {"type": "precision_at_10", "value": 5.337}, {"type": "precision_at_100", "value": 0.8250000000000001}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_3", "value": 13.241}, {"type": "precision_at_5", "value": 8.895999999999999}, {"type": "recall_at_1", "value": 24.537}, {"type": "recall_at_10", "value": 44.324999999999996}, {"type": "recall_at_100", "value": 65.949}, {"type": "recall_at_1000", "value": 84.017}, {"type": "recall_at_3", "value": 33.857}, {"type": "recall_at_5", "value": 37.316}, {"type": "map_at_1", "value": 17.122}, {"type": "map_at_10", "value": 24.32}, {"type": "map_at_100", "value": 25.338}, {"type": "map_at_1000", 
"value": 25.462}, {"type": "map_at_3", "value": 22.064}, {"type": "map_at_5", "value": 23.322000000000003}, {"type": "mrr_at_1", "value": 20.647}, {"type": "mrr_at_10", "value": 27.858}, {"type": "mrr_at_100", "value": 28.743999999999996}, {"type": "mrr_at_1000", "value": 28.819}, {"type": "mrr_at_3", "value": 25.769}, {"type": "mrr_at_5", "value": 26.964}, {"type": "ndcg_at_1", "value": 20.647}, {"type": "ndcg_at_10", "value": 28.849999999999998}, {"type": "ndcg_at_100", "value": 33.849000000000004}, {"type": "ndcg_at_1000", "value": 36.802}, {"type": "ndcg_at_3", "value": 24.799}, {"type": "ndcg_at_5", "value": 26.682}, {"type": "precision_at_1", "value": 20.647}, {"type": "precision_at_10", "value": 5.2170000000000005}, {"type": "precision_at_100", "value": 0.906}, {"type": "precision_at_1000", "value": 0.134}, {"type": "precision_at_3", "value": 11.769}, {"type": "precision_at_5", "value": 8.486}, {"type": "recall_at_1", "value": 17.122}, {"type": "recall_at_10", "value": 38.999}, {"type": "recall_at_100", "value": 61.467000000000006}, {"type": "recall_at_1000", "value": 82.716}, {"type": "recall_at_3", "value": 27.601}, {"type": "recall_at_5", "value": 32.471}, {"type": "map_at_1", "value": 24.396}, {"type": "map_at_10", "value": 33.415}, {"type": "map_at_100", "value": 34.521}, {"type": "map_at_1000", "value": 34.631}, {"type": "map_at_3", "value": 30.703999999999997}, {"type": "map_at_5", "value": 32.166}, {"type": "mrr_at_1", "value": 28.825}, {"type": "mrr_at_10", "value": 37.397000000000006}, {"type": "mrr_at_100", "value": 38.286}, {"type": "mrr_at_1000", "value": 38.346000000000004}, {"type": "mrr_at_3", "value": 35.028}, {"type": "mrr_at_5", "value": 36.32}, {"type": "ndcg_at_1", "value": 28.825}, {"type": "ndcg_at_10", "value": 38.656}, {"type": "ndcg_at_100", "value": 43.856}, {"type": "ndcg_at_1000", "value": 46.31}, {"type": "ndcg_at_3", "value": 33.793}, {"type": "ndcg_at_5", "value": 35.909}, {"type": "precision_at_1", "value": 28.825}, {"type": "precision_at_10", "value": 6.567}, {"type": "precision_at_100", "value": 1.0330000000000001}, {"type": "precision_at_1000", "value": 0.135}, {"type": "precision_at_3", "value": 15.516}, {"type": "precision_at_5", "value": 10.914}, {"type": "recall_at_1", "value": 24.396}, {"type": "recall_at_10", "value": 50.747}, {"type": "recall_at_100", "value": 73.477}, {"type": "recall_at_1000", "value": 90.801}, {"type": "recall_at_3", "value": 37.1}, {"type": "recall_at_5", "value": 42.589}, {"type": "map_at_1", "value": 25.072}, {"type": "map_at_10", "value": 34.307}, {"type": "map_at_100", "value": 35.725}, {"type": "map_at_1000", "value": 35.943999999999996}, {"type": "map_at_3", "value": 30.906}, {"type": "map_at_5", "value": 32.818000000000005}, {"type": "mrr_at_1", "value": 29.644}, {"type": "mrr_at_10", "value": 38.673}, {"type": "mrr_at_100", "value": 39.459}, {"type": "mrr_at_1000", "value": 39.527}, {"type": "mrr_at_3", "value": 35.771}, {"type": "mrr_at_5", "value": 37.332}, {"type": "ndcg_at_1", "value": 29.644}, {"type": "ndcg_at_10", "value": 40.548}, {"type": "ndcg_at_100", "value": 45.678999999999995}, {"type": "ndcg_at_1000", "value": 48.488}, {"type": "ndcg_at_3", "value": 34.887}, {"type": "ndcg_at_5", "value": 37.543}, {"type": "precision_at_1", "value": 29.644}, {"type": "precision_at_10", "value": 7.688000000000001}, {"type": "precision_at_100", "value": 1.482}, {"type": "precision_at_1000", "value": 0.23600000000000002}, {"type": "precision_at_3", "value": 16.206}, {"type": "precision_at_5", "value": 12.016}, 
{"type": "recall_at_1", "value": 25.072}, {"type": "recall_at_10", "value": 53.478}, {"type": "recall_at_100", "value": 76.07300000000001}, {"type": "recall_at_1000", "value": 93.884}, {"type": "recall_at_3", "value": 37.583}, {"type": "recall_at_5", "value": 44.464}, {"type": "map_at_1", "value": 20.712}, {"type": "map_at_10", "value": 27.467999999999996}, {"type": "map_at_100", "value": 28.502}, {"type": "map_at_1000", "value": 28.610000000000003}, {"type": "map_at_3", "value": 24.887999999999998}, {"type": "map_at_5", "value": 26.273999999999997}, {"type": "mrr_at_1", "value": 22.736}, {"type": "mrr_at_10", "value": 29.553}, {"type": "mrr_at_100", "value": 30.485}, {"type": "mrr_at_1000", "value": 30.56}, {"type": "mrr_at_3", "value": 27.078999999999997}, {"type": "mrr_at_5", "value": 28.401}, {"type": "ndcg_at_1", "value": 22.736}, {"type": "ndcg_at_10", "value": 32.023}, {"type": "ndcg_at_100", "value": 37.158}, {"type": "ndcg_at_1000", "value": 39.823}, {"type": "ndcg_at_3", "value": 26.951999999999998}, {"type": "ndcg_at_5", "value": 29.281000000000002}, {"type": "precision_at_1", "value": 22.736}, {"type": "precision_at_10", "value": 5.213}, {"type": "precision_at_100", "value": 0.832}, {"type": "precision_at_1000", "value": 0.116}, {"type": "precision_at_3", "value": 11.459999999999999}, {"type": "precision_at_5", "value": 8.244}, {"type": "recall_at_1", "value": 20.712}, {"type": "recall_at_10", "value": 44.057}, {"type": "recall_at_100", "value": 67.944}, {"type": "recall_at_1000", "value": 87.925}, {"type": "recall_at_3", "value": 30.305}, {"type": "recall_at_5", "value": 36.071999999999996}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 10.181999999999999}, {"type": "map_at_10", "value": 16.66}, {"type": "map_at_100", "value": 18.273}, {"type": "map_at_1000", "value": 18.45}, {"type": "map_at_3", "value": 14.141}, {"type": "map_at_5", "value": 15.455}, {"type": "mrr_at_1", "value": 22.15}, {"type": "mrr_at_10", "value": 32.062000000000005}, {"type": "mrr_at_100", "value": 33.116}, {"type": "mrr_at_1000", "value": 33.168}, {"type": "mrr_at_3", "value": 28.827}, {"type": "mrr_at_5", "value": 30.892999999999997}, {"type": "ndcg_at_1", "value": 22.15}, {"type": "ndcg_at_10", "value": 23.532}, {"type": "ndcg_at_100", "value": 30.358}, {"type": "ndcg_at_1000", "value": 33.783}, {"type": "ndcg_at_3", "value": 19.222}, {"type": "ndcg_at_5", "value": 20.919999999999998}, {"type": "precision_at_1", "value": 22.15}, {"type": "precision_at_10", "value": 7.185999999999999}, {"type": "precision_at_100", "value": 1.433}, {"type": "precision_at_1000", "value": 0.207}, {"type": "precision_at_3", "value": 13.941}, {"type": "precision_at_5", "value": 10.906}, {"type": "recall_at_1", "value": 10.181999999999999}, {"type": "recall_at_10", "value": 28.104000000000003}, {"type": "recall_at_100", "value": 51.998999999999995}, {"type": "recall_at_1000", "value": 71.311}, {"type": "recall_at_3", "value": 17.698}, {"type": "recall_at_5", "value": 22.262999999999998}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 6.669}, {"type": "map_at_10", "value": 15.552}, {"type": "map_at_100", "value": 21.865000000000002}, {"type": "map_at_1000", "value": 23.268}, {"type": "map_at_3", "value": 11.309}, {"type": 
"map_at_5", "value": 13.084000000000001}, {"type": "mrr_at_1", "value": 55.50000000000001}, {"type": "mrr_at_10", "value": 66.46600000000001}, {"type": "mrr_at_100", "value": 66.944}, {"type": "mrr_at_1000", "value": 66.956}, {"type": "mrr_at_3", "value": 64.542}, {"type": "mrr_at_5", "value": 65.717}, {"type": "ndcg_at_1", "value": 44.75}, {"type": "ndcg_at_10", "value": 35.049}, {"type": "ndcg_at_100", "value": 39.073}, {"type": "ndcg_at_1000", "value": 46.208}, {"type": "ndcg_at_3", "value": 39.525}, {"type": "ndcg_at_5", "value": 37.156}, {"type": "precision_at_1", "value": 55.50000000000001}, {"type": "precision_at_10", "value": 27.800000000000004}, {"type": "precision_at_100", "value": 9.013}, {"type": "precision_at_1000", "value": 1.8800000000000001}, {"type": "precision_at_3", "value": 42.667}, {"type": "precision_at_5", "value": 36.0}, {"type": "recall_at_1", "value": 6.669}, {"type": "recall_at_10", "value": 21.811}, {"type": "recall_at_100", "value": 45.112}, {"type": "recall_at_1000", "value": 67.806}, {"type": "recall_at_3", "value": 13.373}, {"type": "recall_at_5", "value": 16.615}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 48.769999999999996}, {"type": "f1", "value": 42.91448356376592}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 54.013}, {"type": "map_at_10", "value": 66.239}, {"type": "map_at_100", "value": 66.62599999999999}, {"type": "map_at_1000", "value": 66.644}, {"type": "map_at_3", "value": 63.965}, {"type": "map_at_5", "value": 65.45400000000001}, {"type": "mrr_at_1", "value": 58.221000000000004}, {"type": "mrr_at_10", "value": 70.43700000000001}, {"type": "mrr_at_100", "value": 70.744}, {"type": "mrr_at_1000", "value": 70.75099999999999}, {"type": "mrr_at_3", "value": 68.284}, {"type": "mrr_at_5", "value": 69.721}, {"type": "ndcg_at_1", "value": 58.221000000000004}, {"type": "ndcg_at_10", "value": 72.327}, {"type": "ndcg_at_100", "value": 73.953}, {"type": "ndcg_at_1000", "value": 74.312}, {"type": "ndcg_at_3", "value": 68.062}, {"type": "ndcg_at_5", "value": 70.56400000000001}, {"type": "precision_at_1", "value": 58.221000000000004}, {"type": "precision_at_10", "value": 9.521}, {"type": "precision_at_100", "value": 1.045}, {"type": "precision_at_1000", "value": 0.109}, {"type": "precision_at_3", "value": 27.348}, {"type": "precision_at_5", "value": 17.794999999999998}, {"type": "recall_at_1", "value": 54.013}, {"type": "recall_at_10", "value": 86.957}, {"type": "recall_at_100", "value": 93.911}, {"type": "recall_at_1000", "value": 96.38}, {"type": "recall_at_3", "value": 75.555}, {"type": "recall_at_5", "value": 81.671}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 21.254}, {"type": "map_at_10", "value": 33.723}, {"type": "map_at_100", "value": 35.574}, {"type": "map_at_1000", "value": 35.730000000000004}, {"type": "map_at_3", "value": 29.473}, {"type": "map_at_5", "value": 31.543}, {"type": "mrr_at_1", "value": 41.358}, {"type": "mrr_at_10", "value": 49.498}, {"type": "mrr_at_100", "value": 50.275999999999996}, {"type": "mrr_at_1000", "value": 50.308}, {"type": "mrr_at_3", 
"value": 47.016000000000005}, {"type": "mrr_at_5", "value": 48.336}, {"type": "ndcg_at_1", "value": 41.358}, {"type": "ndcg_at_10", "value": 41.579}, {"type": "ndcg_at_100", "value": 48.455}, {"type": "ndcg_at_1000", "value": 51.165000000000006}, {"type": "ndcg_at_3", "value": 37.681}, {"type": "ndcg_at_5", "value": 38.49}, {"type": "precision_at_1", "value": 41.358}, {"type": "precision_at_10", "value": 11.543000000000001}, {"type": "precision_at_100", "value": 1.87}, {"type": "precision_at_1000", "value": 0.23600000000000002}, {"type": "precision_at_3", "value": 24.743000000000002}, {"type": "precision_at_5", "value": 17.994}, {"type": "recall_at_1", "value": 21.254}, {"type": "recall_at_10", "value": 48.698}, {"type": "recall_at_100", "value": 74.588}, {"type": "recall_at_1000", "value": 91.00200000000001}, {"type": "recall_at_3", "value": 33.939}, {"type": "recall_at_5", "value": 39.367000000000004}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 35.922}, {"type": "map_at_10", "value": 52.32599999999999}, {"type": "map_at_100", "value": 53.18000000000001}, {"type": "map_at_1000", "value": 53.245}, {"type": "map_at_3", "value": 49.294}, {"type": "map_at_5", "value": 51.202999999999996}, {"type": "mrr_at_1", "value": 71.843}, {"type": "mrr_at_10", "value": 78.24600000000001}, {"type": "mrr_at_100", "value": 78.515}, {"type": "mrr_at_1000", "value": 78.527}, {"type": "mrr_at_3", "value": 77.17500000000001}, {"type": "mrr_at_5", "value": 77.852}, {"type": "ndcg_at_1", "value": 71.843}, {"type": "ndcg_at_10", "value": 61.379}, {"type": "ndcg_at_100", "value": 64.535}, {"type": "ndcg_at_1000", "value": 65.888}, {"type": "ndcg_at_3", "value": 56.958}, {"type": "ndcg_at_5", "value": 59.434}, {"type": "precision_at_1", "value": 71.843}, {"type": "precision_at_10", "value": 12.686}, {"type": "precision_at_100", "value": 1.517}, {"type": "precision_at_1000", "value": 0.16999999999999998}, {"type": "precision_at_3", "value": 35.778}, {"type": "precision_at_5", "value": 23.422}, {"type": "recall_at_1", "value": 35.922}, {"type": "recall_at_10", "value": 63.43}, {"type": "recall_at_100", "value": 75.868}, {"type": "recall_at_1000", "value": 84.88900000000001}, {"type": "recall_at_3", "value": 53.666000000000004}, {"type": "recall_at_5", "value": 58.555}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 79.4408}, {"type": "ap", "value": 73.52820871620366}, {"type": "f1", "value": 79.36240238685001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 21.826999999999998}, {"type": "map_at_10", "value": 34.04}, {"type": "map_at_100", "value": 35.226}, {"type": "map_at_1000", "value": 35.275}, {"type": "map_at_3", "value": 30.165999999999997}, {"type": "map_at_5", "value": 32.318000000000005}, {"type": "mrr_at_1", "value": 22.464000000000002}, {"type": "mrr_at_10", "value": 34.631}, {"type": "mrr_at_100", "value": 35.752}, {"type": "mrr_at_1000", "value": 35.795}, {"type": "mrr_at_3", "value": 30.798}, {"type": "mrr_at_5", "value": 32.946999999999996}, {"type": "ndcg_at_1", "value": 22.464000000000002}, {"type": "ndcg_at_10", "value": 
40.919}, {"type": "ndcg_at_100", "value": 46.632}, {"type": "ndcg_at_1000", "value": 47.833}, {"type": "ndcg_at_3", "value": 32.992}, {"type": "ndcg_at_5", "value": 36.834}, {"type": "precision_at_1", "value": 22.464000000000002}, {"type": "precision_at_10", "value": 6.494}, {"type": "precision_at_100", "value": 0.9369999999999999}, {"type": "precision_at_1000", "value": 0.104}, {"type": "precision_at_3", "value": 14.021}, {"type": "precision_at_5", "value": 10.347000000000001}, {"type": "recall_at_1", "value": 21.826999999999998}, {"type": "recall_at_10", "value": 62.132}, {"type": "recall_at_100", "value": 88.55199999999999}, {"type": "recall_at_1000", "value": 97.707}, {"type": "recall_at_3", "value": 40.541}, {"type": "recall_at_5", "value": 49.739}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 95.68399452804377}, {"type": "f1", "value": 95.25490609832268}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 83.15321477428182}, {"type": "f1", "value": 60.35476439087966}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 71.92669804976462}, {"type": "f1", "value": 69.22815107207565}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 74.4855413584398}, {"type": "f1", "value": 72.92107516103387}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 32.412679360205544}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 28.09211869875204}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 30.540919056982545}, {"type": "mrr", "value": 31.529904607063536}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 5.745}, {"type": "map_at_10", "value": 12.013}, {"type": "map_at_100", "value": 15.040000000000001}, {"type": "map_at_1000", "value": 16.427}, {"type": "map_at_3", "value": 8.841000000000001}, {"type": "map_at_5", "value": 10.289}, {"type": "mrr_at_1", "value": 45.201}, {"type": "mrr_at_10", "value": 53.483999999999995}, {"type": "mrr_at_100", "value": 54.20700000000001}, {"type": "mrr_at_1000", "value": 
54.252}, {"type": "mrr_at_3", "value": 51.29}, {"type": "mrr_at_5", "value": 52.73}, {"type": "ndcg_at_1", "value": 43.808}, {"type": "ndcg_at_10", "value": 32.445}, {"type": "ndcg_at_100", "value": 30.031000000000002}, {"type": "ndcg_at_1000", "value": 39.007}, {"type": "ndcg_at_3", "value": 37.204}, {"type": "ndcg_at_5", "value": 35.07}, {"type": "precision_at_1", "value": 45.201}, {"type": "precision_at_10", "value": 23.684}, {"type": "precision_at_100", "value": 7.600999999999999}, {"type": "precision_at_1000", "value": 2.043}, {"type": "precision_at_3", "value": 33.953}, {"type": "precision_at_5", "value": 29.412}, {"type": "recall_at_1", "value": 5.745}, {"type": "recall_at_10", "value": 16.168}, {"type": "recall_at_100", "value": 30.875999999999998}, {"type": "recall_at_1000", "value": 62.686}, {"type": "recall_at_3", "value": 9.75}, {"type": "recall_at_5", "value": 12.413}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 37.828}, {"type": "map_at_10", "value": 53.239000000000004}, {"type": "map_at_100", "value": 54.035999999999994}, {"type": "map_at_1000", "value": 54.067}, {"type": "map_at_3", "value": 49.289}, {"type": "map_at_5", "value": 51.784}, {"type": "mrr_at_1", "value": 42.497}, {"type": "mrr_at_10", "value": 55.916999999999994}, {"type": "mrr_at_100", "value": 56.495}, {"type": "mrr_at_1000", "value": 56.516999999999996}, {"type": "mrr_at_3", "value": 52.800000000000004}, {"type": "mrr_at_5", "value": 54.722}, {"type": "ndcg_at_1", "value": 42.468}, {"type": "ndcg_at_10", "value": 60.437}, {"type": "ndcg_at_100", "value": 63.731}, {"type": "ndcg_at_1000", "value": 64.41799999999999}, {"type": "ndcg_at_3", "value": 53.230999999999995}, {"type": "ndcg_at_5", "value": 57.26}, {"type": "precision_at_1", "value": 42.468}, {"type": "precision_at_10", "value": 9.47}, {"type": "precision_at_100", "value": 1.1360000000000001}, {"type": "precision_at_1000", "value": 0.12}, {"type": "precision_at_3", "value": 23.724999999999998}, {"type": "precision_at_5", "value": 16.593}, {"type": "recall_at_1", "value": 37.828}, {"type": "recall_at_10", "value": 79.538}, {"type": "recall_at_100", "value": 93.646}, {"type": "recall_at_1000", "value": 98.72999999999999}, {"type": "recall_at_3", "value": 61.134}, {"type": "recall_at_5", "value": 70.377}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 70.548}, {"type": "map_at_10", "value": 84.466}, {"type": "map_at_100", "value": 85.10600000000001}, {"type": "map_at_1000", "value": 85.123}, {"type": "map_at_3", "value": 81.57600000000001}, {"type": "map_at_5", "value": 83.399}, {"type": "mrr_at_1", "value": 81.24}, {"type": "mrr_at_10", "value": 87.457}, {"type": "mrr_at_100", "value": 87.574}, {"type": "mrr_at_1000", "value": 87.575}, {"type": "mrr_at_3", "value": 86.507}, {"type": "mrr_at_5", "value": 87.205}, {"type": "ndcg_at_1", "value": 81.25}, {"type": "ndcg_at_10", "value": 88.203}, {"type": "ndcg_at_100", "value": 89.457}, {"type": "ndcg_at_1000", "value": 89.563}, {"type": "ndcg_at_3", "value": 85.465}, {"type": "ndcg_at_5", "value": 87.007}, {"type": "precision_at_1", "value": 81.25}, {"type": "precision_at_10", "value": 13.373}, {"type": "precision_at_100", "value": 1.5270000000000001}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", 
"value": 37.417}, {"type": "precision_at_5", "value": 24.556}, {"type": "recall_at_1", "value": 70.548}, {"type": "recall_at_10", "value": 95.208}, {"type": "recall_at_100", "value": 99.514}, {"type": "recall_at_1000", "value": 99.988}, {"type": "recall_at_3", "value": 87.214}, {"type": "recall_at_5", "value": 91.696}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 53.04822095496839}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 60.30778476474675}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 4.692}, {"type": "map_at_10", "value": 11.766}, {"type": "map_at_100", "value": 13.904}, {"type": "map_at_1000", "value": 14.216999999999999}, {"type": "map_at_3", "value": 8.245}, {"type": "map_at_5", "value": 9.92}, {"type": "mrr_at_1", "value": 23.0}, {"type": "mrr_at_10", "value": 33.78}, {"type": "mrr_at_100", "value": 34.922}, {"type": "mrr_at_1000", "value": 34.973}, {"type": "mrr_at_3", "value": 30.2}, {"type": "mrr_at_5", "value": 32.565}, {"type": "ndcg_at_1", "value": 23.0}, {"type": "ndcg_at_10", "value": 19.863}, {"type": "ndcg_at_100", "value": 28.141}, {"type": "ndcg_at_1000", "value": 33.549}, {"type": "ndcg_at_3", "value": 18.434}, {"type": "ndcg_at_5", "value": 16.384}, {"type": "precision_at_1", "value": 23.0}, {"type": "precision_at_10", "value": 10.39}, {"type": "precision_at_100", "value": 2.235}, {"type": "precision_at_1000", "value": 0.35300000000000004}, {"type": "precision_at_3", "value": 17.133000000000003}, {"type": "precision_at_5", "value": 14.44}, {"type": "recall_at_1", "value": 4.692}, {"type": "recall_at_10", "value": 21.025}, {"type": "recall_at_100", "value": 45.324999999999996}, {"type": "recall_at_1000", "value": 71.675}, {"type": "recall_at_3", "value": 10.440000000000001}, {"type": "recall_at_5", "value": 14.64}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.96178184892842}, {"type": "cos_sim_spearman", "value": 79.6487740813199}, {"type": "euclidean_pearson", "value": 82.06661161625023}, {"type": "euclidean_spearman", "value": 79.64876769031183}, {"type": "manhattan_pearson", "value": 82.07061164575131}, {"type": "manhattan_spearman", "value": 79.65197039464537}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.15305604100027}, {"type": "cos_sim_spearman", "value": 74.27447427941591}, {"type": "euclidean_pearson", "value": 80.52737337565307}, {"type": "euclidean_spearman", "value": 74.27416077132192}, {"type": "manhattan_pearson", "value": 80.53728571140387}, {"type": "manhattan_spearman", "value": 74.28853605753457}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": 
"test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.44386080639279}, {"type": "cos_sim_spearman", "value": 84.17947648159536}, {"type": "euclidean_pearson", "value": 83.34145388129387}, {"type": "euclidean_spearman", "value": 84.17947648159536}, {"type": "manhattan_pearson", "value": 83.30699061927966}, {"type": "manhattan_spearman", "value": 84.18125737380451}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 81.57392220985612}, {"type": "cos_sim_spearman", "value": 78.80745014464101}, {"type": "euclidean_pearson", "value": 80.01660371487199}, {"type": "euclidean_spearman", "value": 78.80741240102256}, {"type": "manhattan_pearson", "value": 79.96810779507953}, {"type": "manhattan_spearman", "value": 78.75600400119448}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.85421063026625}, {"type": "cos_sim_spearman", "value": 87.55320285299192}, {"type": "euclidean_pearson", "value": 86.69750143323517}, {"type": "euclidean_spearman", "value": 87.55320284326378}, {"type": "manhattan_pearson", "value": 86.63379169960379}, {"type": "manhattan_spearman", "value": 87.4815029877984}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.31314130411842}, {"type": "cos_sim_spearman", "value": 85.3489588181433}, {"type": "euclidean_pearson", "value": 84.13240933463535}, {"type": "euclidean_spearman", "value": 85.34902871403281}, {"type": "manhattan_pearson", "value": 84.01183086503559}, {"type": "manhattan_spearman", "value": 85.19316703166102}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 89.09979781689536}, {"type": "cos_sim_spearman", "value": 88.87813323759015}, {"type": "euclidean_pearson", "value": 88.65413031123792}, {"type": "euclidean_spearman", "value": 88.87813323759015}, {"type": "manhattan_pearson", "value": 88.61818758256024}, {"type": "manhattan_spearman", "value": 88.81044100494604}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 62.30693258111531}, {"type": "cos_sim_spearman", "value": 62.195516523251946}, {"type": "euclidean_pearson", "value": 62.951283701049476}, {"type": "euclidean_spearman", "value": 62.195516523251946}, {"type": "manhattan_pearson", "value": 63.068322281439535}, {"type": "manhattan_spearman", "value": 62.10621171028406}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.27092833763909}, {"type": "cos_sim_spearman", "value": 84.84429717949759}, {"type": 
"euclidean_pearson", "value": 84.8516966060792}, {"type": "euclidean_spearman", "value": 84.84429717949759}, {"type": "manhattan_pearson", "value": 84.82203139242881}, {"type": "manhattan_spearman", "value": 84.8358503952945}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 83.10290863981409}, {"type": "mrr", "value": 95.31168450286097}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 52.161}, {"type": "map_at_10", "value": 62.138000000000005}, {"type": "map_at_100", "value": 62.769}, {"type": "map_at_1000", "value": 62.812}, {"type": "map_at_3", "value": 59.111000000000004}, {"type": "map_at_5", "value": 60.995999999999995}, {"type": "mrr_at_1", "value": 55.333}, {"type": "mrr_at_10", "value": 63.504000000000005}, {"type": "mrr_at_100", "value": 64.036}, {"type": "mrr_at_1000", "value": 64.08}, {"type": "mrr_at_3", "value": 61.278}, {"type": "mrr_at_5", "value": 62.778}, {"type": "ndcg_at_1", "value": 55.333}, {"type": "ndcg_at_10", "value": 66.678}, {"type": "ndcg_at_100", "value": 69.415}, {"type": "ndcg_at_1000", "value": 70.453}, {"type": "ndcg_at_3", "value": 61.755}, {"type": "ndcg_at_5", "value": 64.546}, {"type": "precision_at_1", "value": 55.333}, {"type": "precision_at_10", "value": 9.033}, {"type": "precision_at_100", "value": 1.043}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_3", "value": 24.221999999999998}, {"type": "precision_at_5", "value": 16.333000000000002}, {"type": "recall_at_1", "value": 52.161}, {"type": "recall_at_10", "value": 79.156}, {"type": "recall_at_100", "value": 91.333}, {"type": "recall_at_1000", "value": 99.333}, {"type": "recall_at_3", "value": 66.43299999999999}, {"type": "recall_at_5", "value": 73.272}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.81287128712871}, {"type": "cos_sim_ap", "value": 95.30034785910676}, {"type": "cos_sim_f1", "value": 90.28629856850716}, {"type": "cos_sim_precision", "value": 92.36401673640168}, {"type": "cos_sim_recall", "value": 88.3}, {"type": "dot_accuracy", "value": 99.81287128712871}, {"type": "dot_ap", "value": 95.30034785910676}, {"type": "dot_f1", "value": 90.28629856850716}, {"type": "dot_precision", "value": 92.36401673640168}, {"type": "dot_recall", "value": 88.3}, {"type": "euclidean_accuracy", "value": 99.81287128712871}, {"type": "euclidean_ap", "value": 95.30034785910676}, {"type": "euclidean_f1", "value": 90.28629856850716}, {"type": "euclidean_precision", "value": 92.36401673640168}, {"type": "euclidean_recall", "value": 88.3}, {"type": "manhattan_accuracy", "value": 99.80990099009901}, {"type": "manhattan_ap", "value": 95.26880751950654}, {"type": "manhattan_f1", "value": 90.22177419354838}, {"type": "manhattan_precision", "value": 90.95528455284553}, {"type": "manhattan_recall", "value": 89.5}, {"type": "max_accuracy", "value": 99.81287128712871}, {"type": "max_ap", "value": 95.30034785910676}, {"type": "max_f1", "value": 90.28629856850716}]}, {"task": {"type": "Clustering"}, 
"dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 58.518662504351184}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 34.96168178378587}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 52.04862593471896}, {"type": "mrr", "value": 52.97238402936932}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.092545236479946}, {"type": "cos_sim_spearman", "value": 31.599851000175498}, {"type": "dot_pearson", "value": 30.092542723901676}, {"type": "dot_spearman", "value": 31.599851000175498}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.189}, {"type": "map_at_10", "value": 1.662}, {"type": "map_at_100", "value": 9.384}, {"type": "map_at_1000", "value": 22.669}, {"type": "map_at_3", "value": 0.5559999999999999}, {"type": "map_at_5", "value": 0.9039999999999999}, {"type": "mrr_at_1", "value": 68.0}, {"type": "mrr_at_10", "value": 81.01899999999999}, {"type": "mrr_at_100", "value": 81.01899999999999}, {"type": "mrr_at_1000", "value": 81.01899999999999}, {"type": "mrr_at_3", "value": 79.333}, {"type": "mrr_at_5", "value": 80.733}, {"type": "ndcg_at_1", "value": 63.0}, {"type": "ndcg_at_10", "value": 65.913}, {"type": "ndcg_at_100", "value": 51.895}, {"type": "ndcg_at_1000", "value": 46.967}, {"type": "ndcg_at_3", "value": 65.49199999999999}, {"type": "ndcg_at_5", "value": 66.69699999999999}, {"type": "precision_at_1", "value": 68.0}, {"type": "precision_at_10", "value": 71.6}, {"type": "precision_at_100", "value": 53.66}, {"type": "precision_at_1000", "value": 21.124000000000002}, {"type": "precision_at_3", "value": 72.667}, {"type": "precision_at_5", "value": 74.0}, {"type": "recall_at_1", "value": 0.189}, {"type": "recall_at_10", "value": 1.913}, {"type": "recall_at_100", "value": 12.601999999999999}, {"type": "recall_at_1000", "value": 44.296}, {"type": "recall_at_3", "value": 0.605}, {"type": "recall_at_5", "value": 1.018}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 2.701}, {"type": "map_at_10", "value": 10.445}, {"type": "map_at_100", "value": 17.324}, {"type": "map_at_1000", "value": 19.161}, {"type": "map_at_3", "value": 5.497}, {"type": "map_at_5", "value": 7.278}, {"type": "mrr_at_1", "value": 30.612000000000002}, {"type": "mrr_at_10", "value": 45.534}, {"type": "mrr_at_100", "value": 45.792}, {"type": "mrr_at_1000", "value": 45.806999999999995}, {"type": "mrr_at_3", "value": 37.755}, {"type": "mrr_at_5", "value": 43.469}, {"type": "ndcg_at_1", "value": 26.531}, {"type": 
"ndcg_at_10", "value": 26.235000000000003}, {"type": "ndcg_at_100", "value": 39.17}, {"type": "ndcg_at_1000", "value": 51.038}, {"type": "ndcg_at_3", "value": 23.625}, {"type": "ndcg_at_5", "value": 24.338}, {"type": "precision_at_1", "value": 30.612000000000002}, {"type": "precision_at_10", "value": 24.285999999999998}, {"type": "precision_at_100", "value": 8.224}, {"type": "precision_at_1000", "value": 1.6179999999999999}, {"type": "precision_at_3", "value": 24.490000000000002}, {"type": "precision_at_5", "value": 24.898}, {"type": "recall_at_1", "value": 2.701}, {"type": "recall_at_10", "value": 17.997}, {"type": "recall_at_100", "value": 51.766999999999996}, {"type": "recall_at_1000", "value": 87.863}, {"type": "recall_at_3", "value": 6.295000000000001}, {"type": "recall_at_5", "value": 9.993}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 73.3474}, {"type": "ap", "value": 15.393431414459924}, {"type": "f1", "value": 56.466681887882416}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 62.062818336163}, {"type": "f1", "value": 62.11230840463252}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 42.464892820845115}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 86.15962329379508}, {"type": "cos_sim_ap", "value": 74.73674057919256}, {"type": "cos_sim_f1", "value": 68.81245642574947}, {"type": "cos_sim_precision", "value": 61.48255813953488}, {"type": "cos_sim_recall", "value": 78.12664907651715}, {"type": "dot_accuracy", "value": 86.15962329379508}, {"type": "dot_ap", "value": 74.7367634988281}, {"type": "dot_f1", "value": 68.81245642574947}, {"type": "dot_precision", "value": 61.48255813953488}, {"type": "dot_recall", "value": 78.12664907651715}, {"type": "euclidean_accuracy", "value": 86.15962329379508}, {"type": "euclidean_ap", "value": 74.7367761466634}, {"type": "euclidean_f1", "value": 68.81245642574947}, {"type": "euclidean_precision", "value": 61.48255813953488}, {"type": "euclidean_recall", "value": 78.12664907651715}, {"type": "manhattan_accuracy", "value": 86.21326816474935}, {"type": "manhattan_ap", "value": 74.64416473733951}, {"type": "manhattan_f1", "value": 68.80924855491331}, {"type": "manhattan_precision", "value": 61.23456790123457}, {"type": "manhattan_recall", "value": 78.52242744063325}, {"type": "max_accuracy", "value": 86.21326816474935}, {"type": "max_ap", "value": 74.7367761466634}, {"type": "max_f1", "value": 68.81245642574947}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": 
"8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 88.97620988085536}, {"type": "cos_sim_ap", "value": 86.08680845745758}, {"type": "cos_sim_f1", "value": 78.02793637114438}, {"type": "cos_sim_precision", "value": 73.11082699683736}, {"type": "cos_sim_recall", "value": 83.65414228518632}, {"type": "dot_accuracy", "value": 88.97620988085536}, {"type": "dot_ap", "value": 86.08681149437946}, {"type": "dot_f1", "value": 78.02793637114438}, {"type": "dot_precision", "value": 73.11082699683736}, {"type": "dot_recall", "value": 83.65414228518632}, {"type": "euclidean_accuracy", "value": 88.97620988085536}, {"type": "euclidean_ap", "value": 86.08681215460771}, {"type": "euclidean_f1", "value": 78.02793637114438}, {"type": "euclidean_precision", "value": 73.11082699683736}, {"type": "euclidean_recall", "value": 83.65414228518632}, {"type": "manhattan_accuracy", "value": 88.88888888888889}, {"type": "manhattan_ap", "value": 86.02916327562438}, {"type": "manhattan_f1", "value": 78.02063045516843}, {"type": "manhattan_precision", "value": 73.38851947346994}, {"type": "manhattan_recall", "value": 83.2768709578072}, {"type": "max_accuracy", "value": 88.97620988085536}, {"type": "max_ap", "value": 86.08681215460771}, {"type": "max_f1", "value": 78.02793637114438}]}]}]} | TitanML/jina-v2-base-en-embed | null | [
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"custom_code",
"en",
"dataset:allenai/c4",
"arxiv:2108.12409",
"arxiv:2310.19923",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-04-24T13:44:18+00:00 | [
"2108.12409",
"2310.19923"
] | [
"en"
] | TAGS
#sentence-transformers #pytorch #safetensors #bert #feature-extraction #sentence-similarity #mteb #custom_code #en #dataset-allenai/c4 #arxiv-2108.12409 #arxiv-2310.19923 #license-apache-2.0 #model-index #region-us
|
<br><br>
<p align="center">
<img src="URL/URL alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>The text embedding set trained by <a href="URL"><b>Jina AI</b></a>.</b>
</p>
## Quick Start
The easiest way to start using 'jina-embeddings-v2-base-en' is to use Jina AI's Embedding API.
## Intended Usage & Model Info
'jina-embeddings-v2-base-en' is an English, monolingual embedding model supporting 8192 sequence length.
It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of ALiBi to allow longer sequence length.
The backbone 'jina-bert-v2-base-en' is pretrained on the C4 dataset.
The model is further trained on Jina AI's collection of more than 400 million sentence pairs and hard negatives.
These pairs were obtained from various domains and were carefully selected through a thorough cleaning process.
The embedding model was trained using 512 sequence length, but extrapolates to 8k sequence length (or even longer) thanks to ALiBi.
This makes our model useful for a range of use cases, especially when processing long documents is needed, including long document retrieval, semantic textual similarity, text reranking, recommendation, RAG and LLM-based generative search, etc.
With a standard size of 137 million parameters, the model enables fast inference while delivering better performance than our small model. It is recommended to use a single GPU for inference.
Additionally, we provide the following embedding models:
- 'jina-embeddings-v2-small-en': 33 million parameters.
- 'jina-embeddings-v2-base-en': 137 million parameters (you are here).
- 'jina-embeddings-v2-base-zh': Chinese-English Bilingual embeddings.
- 'jina-embeddings-v2-base-de': German-English Bilingual embeddings.
- 'jina-embeddings-v2-base-es': Spanish-English Bilingual embeddings.
## Data & Parameters
Jina Embeddings V2 technical report
## Usage
<details><summary>Please apply mean pooling when integrating the model.</summary>
<p>
### Why mean pooling?
'mean pooling' takes all token embeddings from the model output and averages them at the sentence/paragraph level.
It has proven to be the most effective way to produce high-quality sentence embeddings.
We offer an 'encode' function to deal with this.
However, if you would like to do it without using the default 'encode' function:
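A minimal sketch of doing the pooling manually with plain transformers, assuming the upstream 'jinaai/jina-embeddings-v2-base-en' repository id (the example sentences are illustrative):

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    # Average all token embeddings, masking out padding positions.
    token_embeddings = model_output[0]
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)

sentences = ["How is the weather today?", "What is the current weather like today?"]

tokenizer = AutoTokenizer.from_pretrained("jinaai/jina-embeddings-v2-base-en")
model = AutoModel.from_pretrained("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True)

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    model_output = model(**encoded_input)

embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
embeddings = F.normalize(embeddings, p=2, dim=1)
```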
</p>
</details>
You can use Jina Embedding models directly from transformers package.
First, you need to make sure that you are logged into huggingface. You can use the huggingface-cli tool (after installing the 'transformers' package) and pass your huggingface access token:
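For example (a sketch; the token placeholder is yours to fill in):

```bash
pip install -U transformers
huggingface-cli login --token <your-access-token>
```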
Alternatively, you can provide the access token as an environment variable in the shell:
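For example, assuming a recent 'huggingface_hub' that reads the 'HF_TOKEN' variable (older versions use 'HUGGING_FACE_HUB_TOKEN'):

```bash
export HF_TOKEN="<your-access-token>"
```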
or in Python:
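For example, via 'huggingface_hub.login':

```python
from huggingface_hub import login

login(token="<your-access-token>")
```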
Then, you can load and use the model via the 'AutoModel' class:
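A sketch of the intended usage, assuming the upstream 'jinaai/jina-embeddings-v2-base-en' repository id:

```python
from numpy.linalg import norm
from transformers import AutoModel

cos_sim = lambda a, b: (a @ b.T) / (norm(a) * norm(b))

# trust_remote_code=True is required so the custom JinaBERT code (and its
# encode helper) is loaded instead of a default BERT implementation.
model = AutoModel.from_pretrained("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True)

embeddings = model.encode(["How is the weather today?", "What is the current weather like today?"])
print(cos_sim(embeddings[0], embeddings[1]))
```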
If you only want to handle shorter sequences, such as 2k, pass the 'max_length' parameter to the 'encode' function:
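For example (the 2048-token cap is illustrative):

```python
embeddings = model.encode(["Very long document to be embedded ..."], max_length=2048)
```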
As of its latest release (v2.3.0), sentence-transformers also supports Jina embeddings (please make sure that you are logged into huggingface as well):
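A sketch, assuming sentence-transformers >= 2.3.0:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True)
# Control the maximum input length (up to the model's 8192-token limit).
model.max_seq_length = 1024

embeddings = model.encode(["How is the weather today?", "What is the current weather like today?"])
print(cos_sim(embeddings[0], embeddings[1]))
```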
## Alternatives to Using the Transformers (or SentenceTransformers) Package
1. _Managed SaaS_: Get started with a free key on Jina AI's Embedding API.
2. _Private and high-performance deployment_: Get started by picking from our suite of models and deploy them on AWS Sagemaker.
## Use Jina Embeddings for RAG
According to the latest blog post from LLamaIndex,
> In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out.
<img src="URL width="780px">
## Plans
1. Bilingual embedding models supporting more European & Asian languages, including Spanish, French, Italian and Japanese.
2. Multimodal embedding models to enable multimodal RAG applications.
3. High-performance rerankers.
## Troubleshooting
Loading of Model Code failed
If you forgot to pass the 'trust_remote_code=True' flag when calling 'AutoModel.from_pretrained' or initializing the model via the 'SentenceTransformer' class, you will receive an error that the model weights could not be initialized.
This is caused by transformers falling back to creating a default BERT model instead of a jina-embeddings model.
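A sketch of the corrected call; the key is the 'trust_remote_code=True' flag:

```python
from transformers import AutoModel

# Without trust_remote_code=True, transformers silently builds a vanilla BERT
# model and the jina-specific weights cannot be mapped onto it.
model = AutoModel.from_pretrained("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True)
```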
User is not logged into Huggingface
The model is only available under gated access.
This means you need to be logged into huggingface to load it.
If you receive a gated-access error, you need to provide an access token, either by using the huggingface-cli or by providing the token via an environment variable as described above.
## Contact
Join our Discord community and chat with other community members about ideas.
If you find Jina Embeddings useful in your research, please cite the following paper:
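A sketch of a BibTeX entry reconstructed from the arXiv identifier (2310.19923) listed in this record's metadata; please verify the details against the paper itself:

```bibtex
@misc{guenther2023jina,
  title={Jina Embeddings 2: 8192-Token General-Purpose Text Embeddings for Long Documents},
  author={G{\"u}nther, Michael and others},
  year={2023},
  eprint={2310.19923},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```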
| [
"## Quick Start\n\nThe easiest way to starting using 'jina-embeddings-v2-base-en' is to use Jina AI's Embedding API.",
"## Intended Usage & Model Info\n\n'jina-embeddings-v2-base-en' is an English, monolingual embedding model supporting 8192 sequence length.\nIt is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of ALiBi to allow longer sequence length.\nThe backbone 'jina-bert-v2-base-en' is pretrained on the C4 dataset.\nThe model is further trained on Jina AI's collection of more than 400 millions of sentence pairs and hard negatives.\nThese pairs were obtained from various domains and were carefully selected through a thorough cleaning process.\n\nThe embedding model was trained using 512 sequence length, but extrapolates to 8k sequence length (or even longer) thanks to ALiBi.\nThis makes our model useful for a range of use cases, especially when processing long documents is needed, including long document retrieval, semantic textual similarity, text reranking, recommendation, RAG and LLM-based generative search, etc.\n\nWith a standard size of 137 million parameters, the model enables fast inference while delivering better performance than our small model. It is recommended to use a single GPU for inference.\nAdditionally, we provide the following embedding models:\n\n- 'jina-embeddings-v2-small-en': 33 million parameters.\n- 'jina-embeddings-v2-base-en': 137 million parameters (you are here).\n- 'jina-embeddings-v2-base-zh': Chinese-English Bilingual embeddings.\n- 'jina-embeddings-v2-base-de': German-English Bilingual embeddings.\n- 'jina-embeddings-v2-base-es': Spanish-English Bilingual embeddings.",
"## Data & Parameters\n\nJina Embeddings V2 technical report",
"## Usage\n\n<details><summary>Please apply mean pooling when integrating the model.</summary>\n<p>",
"### Why mean pooling?\n\n'mean poooling' takes all token embeddings from model output and averaging them at sentence/paragraph level.\nIt has been proved to be the most effective way to produce high-quality sentence embeddings.\nWe offer an 'encode' function to deal with this.\n\nHowever, if you would like to do it without using the default 'encode' function:\n\n\n\n</p>\n</details>\n\nYou can use Jina Embedding models directly from transformers package.\n\nFirst, you need to make sure that you are logged into huggingface. You can either use the huggingface-cli tool (after installing the 'transformers' package) and pass your hugginface access token:\n\nAlternatively, you can provide the access token as an environment variable in the shell:\n\nor in Python:\n\n\nThen, you can use load and use the model via the 'AutoModel' class:\n\n\n\nIf you only want to handle shorter sequence, such as 2k, pass the 'max_length' parameter to the 'encode' function:\n\n\n\nUsing the its latest release (v2.3.0) sentence-transformers also supports Jina embeddings (Please make sure that you are logged into huggingface as well):",
"## Alternatives to Using Transformers (or SentencTransformers) Package\n\n1. _Managed SaaS_: Get started with a free key on Jina AI's Embedding API. \n2. _Private and high-performance deployment_: Get started by picking from our suite of models and deploy them on AWS Sagemaker.",
"## Use Jina Embeddings for RAG\n\nAccording to the latest blog post from LLamaIndex,\n\n> In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out.\n\n<img src=\"URL width=\"780px\">",
"## Plans\n\n1. Bilingual embedding models supporting more European & Asian languages, including Spanish, French, Italian and Japanese.\n2. Multimodal embedding models enable Multimodal RAG applications.\n3. High-performt rerankers.",
"## Trouble Shooting\n\nLoading of Model Code failed\n\nIf you forgot to pass the 'trust_remote_code=True' flag when calling 'AutoModel.from_pretrained' or initializing the model via the 'SentenceTransformer' class, you will receive an error that the model weights could not be initialized.\nThis is caused by tranformers falling back to creating a default BERT model, instead of a jina-embedding model:\n\n\n\n\nUser is not logged into Huggingface\n\nThe model is only availabe under gated access.\nThis means you need to be logged into huggingface load load it.\nIf you receive the following error, you need to provide an access token, either by using the huggingface-cli or providing the token via an environment variable as described above:",
"## Contact\n\nJoin our Discord community and chat with other community members about ideas.\n\nIf you find Jina Embeddings useful in your research, please cite the following paper:"
] | [
"TAGS\n#sentence-transformers #pytorch #safetensors #bert #feature-extraction #sentence-similarity #mteb #custom_code #en #dataset-allenai/c4 #arxiv-2108.12409 #arxiv-2310.19923 #license-apache-2.0 #model-index #region-us \n",
"## Quick Start\n\nThe easiest way to starting using 'jina-embeddings-v2-base-en' is to use Jina AI's Embedding API.",
"## Intended Usage & Model Info\n\n'jina-embeddings-v2-base-en' is an English, monolingual embedding model supporting 8192 sequence length.\nIt is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of ALiBi to allow longer sequence length.\nThe backbone 'jina-bert-v2-base-en' is pretrained on the C4 dataset.\nThe model is further trained on Jina AI's collection of more than 400 millions of sentence pairs and hard negatives.\nThese pairs were obtained from various domains and were carefully selected through a thorough cleaning process.\n\nThe embedding model was trained using 512 sequence length, but extrapolates to 8k sequence length (or even longer) thanks to ALiBi.\nThis makes our model useful for a range of use cases, especially when processing long documents is needed, including long document retrieval, semantic textual similarity, text reranking, recommendation, RAG and LLM-based generative search, etc.\n\nWith a standard size of 137 million parameters, the model enables fast inference while delivering better performance than our small model. It is recommended to use a single GPU for inference.\nAdditionally, we provide the following embedding models:\n\n- 'jina-embeddings-v2-small-en': 33 million parameters.\n- 'jina-embeddings-v2-base-en': 137 million parameters (you are here).\n- 'jina-embeddings-v2-base-zh': Chinese-English Bilingual embeddings.\n- 'jina-embeddings-v2-base-de': German-English Bilingual embeddings.\n- 'jina-embeddings-v2-base-es': Spanish-English Bilingual embeddings.",
"## Data & Parameters\n\nJina Embeddings V2 technical report",
"## Usage\n\n<details><summary>Please apply mean pooling when integrating the model.</summary>\n<p>",
"### Why mean pooling?\n\n'mean poooling' takes all token embeddings from model output and averaging them at sentence/paragraph level.\nIt has been proved to be the most effective way to produce high-quality sentence embeddings.\nWe offer an 'encode' function to deal with this.\n\nHowever, if you would like to do it without using the default 'encode' function:\n\n\n\n</p>\n</details>\n\nYou can use Jina Embedding models directly from transformers package.\n\nFirst, you need to make sure that you are logged into huggingface. You can either use the huggingface-cli tool (after installing the 'transformers' package) and pass your hugginface access token:\n\nAlternatively, you can provide the access token as an environment variable in the shell:\n\nor in Python:\n\n\nThen, you can use load and use the model via the 'AutoModel' class:\n\n\n\nIf you only want to handle shorter sequence, such as 2k, pass the 'max_length' parameter to the 'encode' function:\n\n\n\nUsing the its latest release (v2.3.0) sentence-transformers also supports Jina embeddings (Please make sure that you are logged into huggingface as well):",
"## Alternatives to Using Transformers (or SentencTransformers) Package\n\n1. _Managed SaaS_: Get started with a free key on Jina AI's Embedding API. \n2. _Private and high-performance deployment_: Get started by picking from our suite of models and deploy them on AWS Sagemaker.",
"## Use Jina Embeddings for RAG\n\nAccording to the latest blog post from LLamaIndex,\n\n> In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out.\n\n<img src=\"URL width=\"780px\">",
"## Plans\n\n1. Bilingual embedding models supporting more European & Asian languages, including Spanish, French, Italian and Japanese.\n2. Multimodal embedding models enable Multimodal RAG applications.\n3. High-performt rerankers.",
"## Trouble Shooting\n\nLoading of Model Code failed\n\nIf you forgot to pass the 'trust_remote_code=True' flag when calling 'AutoModel.from_pretrained' or initializing the model via the 'SentenceTransformer' class, you will receive an error that the model weights could not be initialized.\nThis is caused by tranformers falling back to creating a default BERT model, instead of a jina-embedding model:\n\n\n\n\nUser is not logged into Huggingface\n\nThe model is only availabe under gated access.\nThis means you need to be logged into huggingface load load it.\nIf you receive the following error, you need to provide an access token, either by using the huggingface-cli or providing the token via an environment variable as described above:",
"## Contact\n\nJoin our Discord community and chat with other community members about ideas.\n\nIf you find Jina Embeddings useful in your research, please cite the following paper:"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<img src="https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha/resolve/main/thumbnail.png" alt="Zephyr Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for Zephyr 7B β
Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-β is the second model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We found that removing the in-built alignment of these datasets boosted performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, this means the model is likely to generate problematic text when prompted to do so. You can find more details in the [technical report](https://arxiv.org/abs/2310.16944).
## Model description
- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/alignment-handbook
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat
- **Chatbot Arena:** Evaluate Zephyr 7B against 10+ LLMs in the LMSYS arena: http://arena.lmsys.org
## Performance
At the time of release, Zephyr-7B-β is the highest ranked 7B chat model on the [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmarks:
| Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) |
|-------------|-----|----|---------------|--------------|
| StableLM-Tuned-α | 7B| dSFT |2.75| -|
| MPT-Chat | 7B |dSFT |5.42| -|
| Xwin-LM v0.1 | 7B| dPPO| 6.19| 87.83|
| Mistral-Instruct v0.1 | 7B| - | 6.84 |-|
| Zephyr-7b-α |7B| dDPO| 6.88| -|
| **Zephyr-7b-β** 🪁 | **7B** | **dDPO** | **7.34** | **90.60** |
| Falcon-Instruct | 40B |dSFT |5.17 |45.71|
| Guanaco | 65B | SFT |6.41| 71.80|
| Llama2-Chat | 70B |RLHF |6.86| 92.66|
| Vicuna v1.3 | 33B |dSFT |7.12 |88.99|
| WizardLM v1.0 | 70B |dSFT |7.71 |-|
| Xwin-LM v0.1 | 70B |dPPO |- |95.57|
| GPT-3.5-turbo | - |RLHF |7.94 |89.37|
| Claude 2 | - |RLHF |8.06| 91.36|
| GPT-4 | -| RLHF |8.99| 95.28|
In particular, on several categories of MT-Bench, Zephyr-7B-β has strong performance compared to larger open models like Llama2-Chat-70B:

However, on more complex tasks like coding and mathematics, Zephyr-7B-β lags behind proprietary models and more research is needed to close the gap.
## Intended uses & limitations
The model was initially fine-tuned on a filtered and preprocessed version of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat) to test its capabilities.
You can find the datasets used for training Zephyr-7B-β [here](https://huggingface.co/collections/HuggingFaceH4/zephyr-7b-6538c6d6d5ddd1cbb1744a66).
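To make the DPO stage above more concrete, here is a minimal, illustrative sketch of a 🤗 TRL `DPOTrainer` run. It is not the exact training script (that lives in the alignment-handbook repository linked above); the toy dataset, `beta` value, and output directory are placeholders, and in practice the UltraFeedback data would first be flattened into `prompt`/`chosen`/`rejected` text columns:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "mistralai/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# DPOTrainer expects string columns named "prompt", "chosen" and "rejected";
# a real run would build these from HuggingFaceH4/ultrafeedback_binarized.
toy_prefs = Dataset.from_dict({
    "prompt": ["Name one planet."],
    "chosen": ["Mars is a planet."],
    "rejected": ["The moon is a planet."],
})

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # TRL creates the frozen reference copy when None is passed
    beta=0.1,        # DPO temperature (illustrative value)
    args=TrainingArguments(
        output_dir="zephyr-dpo",
        learning_rate=5e-7,  # matches the learning rate reported below
        num_train_epochs=3,
    ),
    train_dataset=toy_prefs,
    tokenizer=tokenizer,
)
trainer.train()
```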
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food!
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Zephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
It is also unknown what the size and composition of the corpus used to train the base model (`mistralai/Mistral-7B-v0.1`) were; however, it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
## Training and evaluation data
During DPO training, this model achieves the following results on the evaluation set:
- Loss: 0.7496
- Rewards/chosen: -4.5221
- Rewards/rejected: -8.3184
- Rewards/accuracies: 0.7812
- Rewards/margins: 3.7963
- Logps/rejected: -340.1541
- Logps/chosen: -299.4561
- Logits/rejected: -2.3081
- Logits/chosen: -2.3531
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
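As a sanity check on the distributed setup, the effective batch sizes follow directly from the per-device values above (no gradient accumulation is listed, so a factor of 1 is assumed):

$$
\text{total\_train\_batch\_size} = 2 \times 16 = 32, \qquad \text{total\_eval\_batch\_size} = 4 \times 16 = 64.
$$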
### Training results
The table below shows the full set of DPO training metrics:
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6284 | 0.05 | 100 | 0.6098 | 0.0425 | -0.1872 | 0.7344 | 0.2297 | -258.8416 | -253.8099 | -2.7976 | -2.8234 |
| 0.4908 | 0.1 | 200 | 0.5426 | -0.0279 | -0.6842 | 0.75 | 0.6563 | -263.8124 | -254.5145 | -2.7719 | -2.7960 |
| 0.5264 | 0.15 | 300 | 0.5324 | 0.0414 | -0.9793 | 0.7656 | 1.0207 | -266.7627 | -253.8209 | -2.7892 | -2.8122 |
| 0.5536 | 0.21 | 400 | 0.4957 | -0.0185 | -1.5276 | 0.7969 | 1.5091 | -272.2460 | -254.4203 | -2.8542 | -2.8764 |
| 0.5362 | 0.26 | 500 | 0.5031 | -0.2630 | -1.5917 | 0.7812 | 1.3287 | -272.8869 | -256.8653 | -2.8702 | -2.8958 |
| 0.5966 | 0.31 | 600 | 0.5963 | -0.2993 | -1.6491 | 0.7812 | 1.3499 | -273.4614 | -257.2279 | -2.8778 | -2.8986 |
| 0.5014 | 0.36 | 700 | 0.5382 | -0.2859 | -1.4750 | 0.75 | 1.1891 | -271.7204 | -257.0942 | -2.7659 | -2.7869 |
| 0.5334 | 0.41 | 800 | 0.5677 | -0.4289 | -1.8968 | 0.7969 | 1.4679 | -275.9378 | -258.5242 | -2.7053 | -2.7265 |
| 0.5251 | 0.46 | 900 | 0.5772 | -0.2116 | -1.3107 | 0.7344 | 1.0991 | -270.0768 | -256.3507 | -2.8463 | -2.8662 |
| 0.5205 | 0.52 | 1000 | 0.5262 | -0.3792 | -1.8585 | 0.7188 | 1.4793 | -275.5552 | -258.0276 | -2.7893 | -2.7979 |
| 0.5094 | 0.57 | 1100 | 0.5433 | -0.6279 | -1.9368 | 0.7969 | 1.3089 | -276.3377 | -260.5136 | -2.7453 | -2.7536 |
| 0.5837 | 0.62 | 1200 | 0.5349 | -0.3780 | -1.9584 | 0.7656 | 1.5804 | -276.5542 | -258.0154 | -2.7643 | -2.7756 |
| 0.5214 | 0.67 | 1300 | 0.5732 | -1.0055 | -2.2306 | 0.7656 | 1.2251 | -279.2761 | -264.2903 | -2.6986 | -2.7113 |
| 0.6914 | 0.72 | 1400 | 0.5137 | -0.6912 | -2.1775 | 0.7969 | 1.4863 | -278.7448 | -261.1467 | -2.7166 | -2.7275 |
| 0.4655 | 0.77 | 1500 | 0.5090 | -0.7987 | -2.2930 | 0.7031 | 1.4943 | -279.8999 | -262.2220 | -2.6651 | -2.6838 |
| 0.5731 | 0.83 | 1600 | 0.5312 | -0.8253 | -2.3520 | 0.7812 | 1.5268 | -280.4902 | -262.4876 | -2.6543 | -2.6728 |
| 0.5233 | 0.88 | 1700 | 0.5206 | -0.4573 | -2.0951 | 0.7812 | 1.6377 | -277.9205 | -258.8084 | -2.6870 | -2.7097 |
| 0.5593 | 0.93 | 1800 | 0.5231 | -0.5508 | -2.2000 | 0.7969 | 1.6492 | -278.9703 | -259.7433 | -2.6221 | -2.6519 |
| 0.4967 | 0.98 | 1900 | 0.5290 | -0.5340 | -1.9570 | 0.8281 | 1.4230 | -276.5395 | -259.5749 | -2.6564 | -2.6878 |
| 0.0921 | 1.03 | 2000 | 0.5368 | -1.1376 | -3.1615 | 0.7812 | 2.0239 | -288.5854 | -265.6111 | -2.6040 | -2.6345 |
| 0.0733 | 1.08 | 2100 | 0.5453 | -1.1045 | -3.4451 | 0.7656 | 2.3406 | -291.4208 | -265.2799 | -2.6289 | -2.6595 |
| 0.0972 | 1.14 | 2200 | 0.5571 | -1.6915 | -3.9823 | 0.8125 | 2.2908 | -296.7934 | -271.1505 | -2.6471 | -2.6709 |
| 0.1058 | 1.19 | 2300 | 0.5789 | -1.0621 | -3.8941 | 0.7969 | 2.8319 | -295.9106 | -264.8563 | -2.5527 | -2.5798 |
| 0.2423 | 1.24 | 2400 | 0.5455 | -1.1963 | -3.5590 | 0.7812 | 2.3627 | -292.5599 | -266.1981 | -2.5414 | -2.5784 |
| 0.1177 | 1.29 | 2500 | 0.5889 | -1.8141 | -4.3942 | 0.7969 | 2.5801 | -300.9120 | -272.3761 | -2.4802 | -2.5189 |
| 0.1213 | 1.34 | 2600 | 0.5683 | -1.4608 | -3.8420 | 0.8125 | 2.3812 | -295.3901 | -268.8436 | -2.4774 | -2.5207 |
| 0.0889 | 1.39 | 2700 | 0.5890 | -1.6007 | -3.7337 | 0.7812 | 2.1330 | -294.3068 | -270.2423 | -2.4123 | -2.4522 |
| 0.0995 | 1.45 | 2800 | 0.6073 | -1.5519 | -3.8362 | 0.8281 | 2.2843 | -295.3315 | -269.7538 | -2.4685 | -2.5050 |
| 0.1145 | 1.5 | 2900 | 0.5790 | -1.7939 | -4.2876 | 0.8438 | 2.4937 | -299.8461 | -272.1744 | -2.4272 | -2.4674 |
| 0.0644 | 1.55 | 3000 | 0.5735 | -1.7285 | -4.2051 | 0.8125 | 2.4766 | -299.0209 | -271.5201 | -2.4193 | -2.4574 |
| 0.0798 | 1.6 | 3100 | 0.5537 | -1.7226 | -4.2850 | 0.8438 | 2.5624 | -299.8200 | -271.4610 | -2.5367 | -2.5696 |
| 0.1013 | 1.65 | 3200 | 0.5575 | -1.5715 | -3.9813 | 0.875 | 2.4098 | -296.7825 | -269.9498 | -2.4926 | -2.5267 |
| 0.1254 | 1.7 | 3300 | 0.5905 | -1.6412 | -4.4703 | 0.8594 | 2.8291 | -301.6730 | -270.6473 | -2.5017 | -2.5340 |
| 0.085 | 1.76 | 3400 | 0.6133 | -1.9159 | -4.6760 | 0.8438 | 2.7601 | -303.7296 | -273.3941 | -2.4614 | -2.4960 |
| 0.065 | 1.81 | 3500 | 0.6074 | -1.8237 | -4.3525 | 0.8594 | 2.5288 | -300.4951 | -272.4724 | -2.4597 | -2.5004 |
| 0.0755 | 1.86 | 3600 | 0.5836 | -1.9252 | -4.4005 | 0.8125 | 2.4753 | -300.9748 | -273.4872 | -2.4327 | -2.4716 |
| 0.0746 | 1.91 | 3700 | 0.5789 | -1.9280 | -4.4906 | 0.8125 | 2.5626 | -301.8762 | -273.5149 | -2.4686 | -2.5115 |
| 0.1348 | 1.96 | 3800 | 0.6015 | -1.8658 | -4.2428 | 0.8281 | 2.3769 | -299.3976 | -272.8936 | -2.4943 | -2.5393 |
| 0.0217 | 2.01 | 3900 | 0.6122 | -2.3335 | -4.9229 | 0.8281 | 2.5894 | -306.1988 | -277.5699 | -2.4841 | -2.5272 |
| 0.0219 | 2.07 | 4000 | 0.6522 | -2.9890 | -6.0164 | 0.8281 | 3.0274 | -317.1334 | -284.1248 | -2.4105 | -2.4545 |
| 0.0119 | 2.12 | 4100 | 0.6922 | -3.4777 | -6.6749 | 0.7969 | 3.1972 | -323.7187 | -289.0121 | -2.4272 | -2.4699 |
| 0.0153 | 2.17 | 4200 | 0.6993 | -3.2406 | -6.6775 | 0.7969 | 3.4369 | -323.7453 | -286.6413 | -2.4047 | -2.4465 |
| 0.011 | 2.22 | 4300 | 0.7178 | -3.7991 | -7.4397 | 0.7656 | 3.6406 | -331.3667 | -292.2260 | -2.3843 | -2.4290 |
| 0.0072 | 2.27 | 4400 | 0.6840 | -3.3269 | -6.8021 | 0.8125 | 3.4752 | -324.9908 | -287.5042 | -2.4095 | -2.4536 |
| 0.0197 | 2.32 | 4500 | 0.7013 | -3.6890 | -7.3014 | 0.8125 | 3.6124 | -329.9841 | -291.1250 | -2.4118 | -2.4543 |
| 0.0182 | 2.37 | 4600 | 0.7476 | -3.8994 | -7.5366 | 0.8281 | 3.6372 | -332.3356 | -293.2291 | -2.4163 | -2.4565 |
| 0.0125 | 2.43 | 4700 | 0.7199 | -4.0560 | -7.5765 | 0.8438 | 3.5204 | -332.7345 | -294.7952 | -2.3699 | -2.4100 |
| 0.0082 | 2.48 | 4800 | 0.7048 | -3.6613 | -7.1356 | 0.875 | 3.4743 | -328.3255 | -290.8477 | -2.3925 | -2.4303 |
| 0.0118 | 2.53 | 4900 | 0.6976 | -3.7908 | -7.3152 | 0.8125 | 3.5244 | -330.1224 | -292.1431 | -2.3633 | -2.4047 |
| 0.0118 | 2.58 | 5000 | 0.7198 | -3.9049 | -7.5557 | 0.8281 | 3.6508 | -332.5271 | -293.2844 | -2.3764 | -2.4194 |
| 0.006 | 2.63 | 5100 | 0.7506 | -4.2118 | -7.9149 | 0.8125 | 3.7032 | -336.1194 | -296.3530 | -2.3407 | -2.3860 |
| 0.0143 | 2.68 | 5200 | 0.7408 | -4.2433 | -7.9802 | 0.8125 | 3.7369 | -336.7721 | -296.6682 | -2.3509 | -2.3946 |
| 0.0057 | 2.74 | 5300 | 0.7552 | -4.3392 | -8.0831 | 0.7969 | 3.7439 | -337.8013 | -297.6275 | -2.3388 | -2.3842 |
| 0.0138 | 2.79 | 5400 | 0.7404 | -4.2395 | -7.9762 | 0.8125 | 3.7367 | -336.7322 | -296.6304 | -2.3286 | -2.3737 |
| 0.0079 | 2.84 | 5500 | 0.7525 | -4.4466 | -8.2196 | 0.7812 | 3.7731 | -339.1662 | -298.7007 | -2.3200 | -2.3641 |
| 0.0077 | 2.89 | 5600 | 0.7520 | -4.5586 | -8.3485 | 0.7969 | 3.7899 | -340.4545 | -299.8206 | -2.3078 | -2.3517 |
| 0.0094 | 2.94 | 5700 | 0.7527 | -4.5542 | -8.3509 | 0.7812 | 3.7967 | -340.4790 | -299.7773 | -2.3062 | -2.3510 |
| 0.0054 | 2.99 | 5800 | 0.7520 | -4.5169 | -8.3079 | 0.7812 | 3.7911 | -340.0493 | -299.4038 | -2.3081 | -2.3530 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.14.0
## Citation
If you find Zephyr-7B-β useful in your work, please cite it with:
```
@misc{tunstall2023zephyr,
title={Zephyr: Direct Distillation of LM Alignment},
author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
year={2023},
eprint={2310.16944},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_HuggingFaceH4__zephyr-7b-beta)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 52.15 |
| ARC (25-shot) | 62.03 |
| HellaSwag (10-shot) | 84.36 |
| MMLU (5-shot) | 61.07 |
| TruthfulQA (0-shot) | 57.45 |
| Winogrande (5-shot) | 77.74 |
| GSM8K (5-shot) | 12.74 |
| DROP (3-shot) | 9.66 | | {"language": ["en"], "license": "mit", "tags": ["generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrachat_200k", "HuggingFaceH4/ultrafeedback_binarized"], "base_model": "mistralai/Mistral-7B-v0.1", "widget": [{"example_title": "Pirate!", "messages": [{"role": "system", "content": "You are a pirate chatbot who always responds with Arr!"}, {"role": "user", "content": "There's a llama on my lawn, how can I get rid of him?"}], "output": {"text": "Arr! 'Tis a puzzlin' matter, me hearty! A llama on yer lawn be a rare sight, but I've got a plan that might help ye get rid of 'im. Ye'll need to gather some carrots and hay, and then lure the llama away with the promise of a tasty treat. Once he's gone, ye can clean up yer lawn and enjoy the peace and quiet once again. But beware, me hearty, for there may be more llamas where that one came from! Arr!"}}], "pipeline_tag": "text-generation", "model-index": [{"name": "zephyr-7b-beta", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 62.03071672354948, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 84.35570603465445, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Drop (3-Shot)", "type": "drop", "split": "validation", "args": {"num_few_shot": 3}}, "metrics": [{"type": "f1", "value": 9.66243708053691, "name": "f1 score"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 57.44916942762855}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 12.736921910538287, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 61.07, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta", "name": "Open LLM Leaderboard"}}, {"task": {"type": 
"text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 77.7426992896606, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AlpacaEval", "type": "tatsu-lab/alpaca_eval"}, "metrics": [{"type": "unknown", "value": 0.906, "name": "win rate"}], "source": {"url": "https://tatsu-lab.github.io/alpaca_eval/"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MT-Bench", "type": "unknown"}, "metrics": [{"type": "unknown", "value": 7.34, "name": "score"}], "source": {"url": "https://huggingface.co/spaces/lmsys/mt-bench"}}]}]} | bwuzhang/test_5 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"arxiv:2310.16944",
"base_model:mistralai/Mistral-7B-v0.1",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T13:46:03+00:00 | [
"2305.18290",
"2310.16944"
] | [
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #generated_from_trainer #conversational #en #dataset-HuggingFaceH4/ultrachat_200k #dataset-HuggingFaceH4/ultrafeedback_binarized #arxiv-2305.18290 #arxiv-2310.16944 #base_model-mistralai/Mistral-7B-v0.1 #license-mit #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| <img src="URL alt="Zephyr Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Model Card for Zephyr 7B β
==========================
Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-β is the second model in the series, and is a fine-tuned version of mistralai/Mistral-7B-v0.1 that was trained on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO). We found that removing the in-built alignment of these datasets boosted performance on MT Bench and made the model more helpful. However, this means the model is likely to generate problematic text when prompted to do so. You can find more details in the technical report.
Model description
-----------------
* Model type: A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
* Language(s) (NLP): Primarily English
* License: MIT
* Finetuned from model: mistralai/Mistral-7B-v0.1
### Model Sources
* Repository: URL
* Demo: URL
* Chatbot Arena: Evaluate Zephyr 7B against 10+ LLMs in the LMSYS arena: URL
Performance
-----------
At the time of release, Zephyr-7B-β is the highest ranked 7B chat model on the MT-Bench and AlpacaEval benchmarks:
In particular, on several categories of MT-Bench, Zephyr-7B-β has strong performance compared to larger open models like Llama2-Chat-70B:
!image/png
However, on more complex tasks like coding and mathematics, Zephyr-7B-β lags behind proprietary models and more research is needed to close the gap.
Intended uses & limitations
---------------------------
The model was initially fine-tuned on a filtered and preprocessed version of the 'UltraChat' dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with TRL's 'DPOTrainer' on the openbmb/UltraFeedback dataset, which contains 64k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our demo to test its capabilities.
You can find the datasets used for training Zephyr-7B-β here.
Here's how you can run the model using the 'pipeline()' function from Transformers:
Bias, Risks, and Limitations
----------------------------
Zephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
It is also unknown what the size and composition of the corpus used to train the base model ('mistralai/Mistral-7B-v0.1') were; however, it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.
Training and evaluation data
----------------------------
During DPO training, this model achieves the following results on the evaluation set:
* Loss: 0.7496
* Rewards/chosen: -4.5221
* Rewards/rejected: -8.3184
* Rewards/accuracies: 0.7812
* Rewards/margins: 3.7963
* Logps/rejected: -340.1541
* Logps/chosen: -299.4561
* Logits/rejected: -2.3081
* Logits/chosen: -2.3531
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-07
* train\_batch\_size: 2
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 16
* total\_train\_batch\_size: 32
* total\_eval\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3.0
### Training results
The table below shows the full set of DPO training metrics:
### Framework versions
* Transformers 4.35.0.dev0
* Pytorch 2.0.1+cu118
* Datasets 2.12.0
* Tokenizers 0.14.0
If you find Zephyr-7B-β useful in your work, please cite it with:
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
| [
"### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n* Chatbot Arena: Evaluate Zephyr 7B against 10+ LLMs in the LMSYS arena: URL\n\n\nPerformance\n-----------\n\n\nAt the time of release, Zephyr-7B-β is the highest ranked 7B chat model on the MT-Bench and AlpacaEval benchmarks:\n\n\n\nIn particular, on several categories of MT-Bench, Zephyr-7B-β has strong performance compared to larger open models like Llama2-Chat-70B:\n\n\n!image/png\n\n\nHowever, on more complex tasks like coding and mathematics, Zephyr-7B-β lags behind proprietary models and more research is needed to close the gap.\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was initially fine-tuned on a filtered and preprocessed of the 'UltraChat' dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.\nWe then further aligned the model with TRL's 'DPOTrainer' on the openbmb/UltraFeedback dataset, which contains 64k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our demo to test its capabilities.\n\n\nYou can find the datasets used for training Zephyr-7B-β here\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nIt is also unknown what the size and composition of the corpus was used to train the base model ('mistralai/Mistral-7B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.\n\n\nTraining and evaluation data\n----------------------------\n\n\nDuring DPO training, this model achieves the following results on the evaluation set:\n\n\n* Loss: 0.7496\n* Rewards/chosen: -4.5221\n* Rewards/rejected: -8.3184\n* Rewards/accuracies: 0.7812\n* Rewards/margins: 3.7963\n* Logps/rejected: -340.1541\n* Logps/chosen: -299.4561\n* Logits/rejected: -2.3081\n* Logits/chosen: -2.3531",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3.0",
"### Training results\n\n\nThe table below shows the full set of DPO training metrics:",
"### Framework versions\n\n\n* Transformers 4.35.0.dev0\n* Pytorch 2.0.1+cu118\n* Datasets 2.12.0\n* Tokenizers 0.14.0\n\n\nIf you find Zephyr-7B-β is useful in your work, please cite it with:\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #generated_from_trainer #conversational #en #dataset-HuggingFaceH4/ultrachat_200k #dataset-HuggingFaceH4/ultrafeedback_binarized #arxiv-2305.18290 #arxiv-2310.16944 #base_model-mistralai/Mistral-7B-v0.1 #license-mit #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n* Chatbot Arena: Evaluate Zephyr 7B against 10+ LLMs in the LMSYS arena: URL\n\n\nPerformance\n-----------\n\n\nAt the time of release, Zephyr-7B-β is the highest ranked 7B chat model on the MT-Bench and AlpacaEval benchmarks:\n\n\n\nIn particular, on several categories of MT-Bench, Zephyr-7B-β has strong performance compared to larger open models like Llama2-Chat-70B:\n\n\n!image/png\n\n\nHowever, on more complex tasks like coding and mathematics, Zephyr-7B-β lags behind proprietary models and more research is needed to close the gap.\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was initially fine-tuned on a filtered and preprocessed of the 'UltraChat' dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.\nWe then further aligned the model with TRL's 'DPOTrainer' on the openbmb/UltraFeedback dataset, which contains 64k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our demo to test its capabilities.\n\n\nYou can find the datasets used for training Zephyr-7B-β here\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nIt is also unknown what the size and composition of the corpus was used to train the base model ('mistralai/Mistral-7B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.\n\n\nTraining and evaluation data\n----------------------------\n\n\nDuring DPO training, this model achieves the following results on the evaluation set:\n\n\n* Loss: 0.7496\n* Rewards/chosen: -4.5221\n* Rewards/rejected: -8.3184\n* Rewards/accuracies: 0.7812\n* Rewards/margins: 3.7963\n* Logps/rejected: -340.1541\n* Logps/chosen: -299.4561\n* Logits/rejected: -2.3081\n* Logits/chosen: -2.3531",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3.0",
"### Training results\n\n\nThe table below shows the full set of DPO training metrics:",
"### Framework versions\n\n\n* Transformers 4.35.0.dev0\n* Pytorch 2.0.1+cu118\n* Datasets 2.12.0\n* Tokenizers 0.14.0\n\n\nIf you find Zephyr-7B-β is useful in your work, please cite it with:\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOOwO/stable-lol2 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:46:35+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | hibikaze/gpt_0.084B_en-ja_step3815 | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T13:49:08+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - sassad/face-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the /home/lch/face/images dataset. You can find some example images in the following.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
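Until the snippet above is filled in, here is a minimal sketch of running these weights with diffusers. The prompt is a placeholder (the effective trigger wording depends on the captions the adapter was trained on), and on older diffusers versions `pipe.unet.load_attn_procs(...)` can be used in place of `load_lora_weights`:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Apply the LoRA attention weights from this repository on top of the base UNet.
pipe.load_lora_weights("sassad/face-lora")

image = pipe("a portrait photo of a face", num_inference_steps=30).images[0]
image.save("face.png")
```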
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "diffusers-training", "lora", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "diffusers-training", "lora", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "diffusers-training", "lora"], "inference": true, "base_model": "runwayml/stable-diffusion-v1-5"} | sassad/face-lora | null | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-04-24T13:49:45+00:00 | [] | [] | TAGS
#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #diffusers-training #lora #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #region-us
|
# LoRA text2image fine-tuning - sassad/face-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the /home/lch/face/images dataset. You can find some example images in the following.
!img_0
!img_1
!img_2
!img_3
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# LoRA text2image fine-tuning - sassad/face-lora\nThese are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the /home/lch/face/images dataset. You can find some example images in the following. \n\n!img_0\n!img_1\n!img_2\n!img_3",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #diffusers-training #lora #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #region-us \n",
"# LoRA text2image fine-tuning - sassad/face-lora\nThese are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the /home/lch/face/images dataset. You can find some example images in the following. \n\n!img_0\n!img_1\n!img_2\n!img_3",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
null | transformers |
# Pointwise MonoBERT trained on Baidu-ULTR with Inverse Propensity Scoring (IPS)
A flax-based MonoBERT cross encoder trained on the [Baidu-ULTR](https://arxiv.org/abs/2207.03051) dataset with the **pointwise sigmoid cross-entropy loss with IPS correction** suggested by [Bekker et al.](https://arxiv.org/abs/1809.03207) and [Saito et al.](https://arxiv.org/abs/1909.03601). The loss uses inverse propensity scoring to mitigate position bias in click data by giving higher weight to clicks on items that are less likely to be observed by users. For more info, [read our paper](https://arxiv.org/abs/2404.02543) and [find the code for this model here](https://github.com/philipphager/baidu-bert-model).
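For intuition, here is a minimal sketch of such an IPS-corrected pointwise loss (an illustration only, not the exact implementation from the linked repository; the per-rank examination propensities are assumed to be estimated beforehand):
```python
import jax
import jax.numpy as jnp

def ips_pointwise_loss(scores, clicks, positions, propensities):
    """Sketch of a sigmoid cross-entropy loss with IPS correction.

    scores: raw document scores, shape (n,)
    clicks: binary click labels, shape (n,)
    positions: 1-based SERP positions, shape (n,)
    propensities: estimated examination probability per rank
    """
    theta = propensities[positions - 1]  # examination probability of each impression
    weights = clicks / theta             # clicks at rarely examined ranks count more
    # Unbiased estimate of the relevance cross-entropy (Bekker et al.; Saito et al.)
    return -jnp.mean(
        weights * jax.nn.log_sigmoid(scores)
        + (1.0 - weights) * jax.nn.log_sigmoid(-scores)
    )
```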
## Test Results on Baidu-ULTR
Ranking performance is measured in DCG, nDCG, and MRR on expert annotations (6,985 queries). Click prediction performance is measured in log-likelihood on one test partition of user clicks (≈297k queries).
| Model | Log-likelihood | DCG@1 | DCG@3 | DCG@5 | DCG@10 | nDCG@10 | MRR@10 |
|------------------------------------------------------------------------------------------------|----------------|-------|-------|-------|--------|---------|--------|
| [Pointwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-pointwise) | 0.227 | 1.641 | 3.462 | 4.752 | 7.251 | 0.357 | 0.609 |
| [Pointwise Two-Tower](https://huggingface.co/philipphager/baidu-ultr_uva-bert_twotower) | 0.218 | 1.629 | 3.471 | 4.822 | 7.456 | 0.367 | 0.607 |
| [Pointwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-pointwise) | 0.222 | 1.295 | 2.811 | 3.977 | 6.296 | 0.307 | 0.534 |
| [Listwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-listwise) | - | 1.947 | 4.108 | 5.614 | 8.478 | 0.405 | 0.639 |
| [Listwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-listwise) | - | 1.671 | 3.530 | 4.873 | 7.450 | 0.361 | 0.603 |
| [Listwise DLA](https://huggingface.co/philipphager/baidu-ultr_uva-bert_dla) | - | 1.796 | 3.730 | 5.125 | 7.802 | 0.377 | 0.615 |
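For reference, DCG@k as reported above can be computed as follows (a sketch using one common formulation; `relevance` holds expert grades in ranked order):
```python
import jax.numpy as jnp

def dcg_at_k(relevance, k):
    """DCG@k = sum_{i=1..k} rel_i / log2(i + 1) over items in ranked order."""
    rel = jnp.asarray(relevance, dtype=jnp.float32)[:k]
    discounts = jnp.log2(jnp.arange(2, rel.shape[0] + 2, dtype=jnp.float32))
    # nDCG@k divides this value by the DCG@k of the ideal (sorted) ordering
    return jnp.sum(rel / discounts)
```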
## Usage
Here is an example of downloading the model and calling it for inference on a mock batch of input data. For more details on how to use the model on the Baidu-ULTR dataset, take a look at our [training](https://github.com/philipphager/baidu-bert-model/blob/main/main.py) and [evaluation scripts](https://github.com/philipphager/baidu-bert-model/blob/main/eval.py) in our code repository.
```Python
import jax.numpy as jnp
from src.model import IPSCrossEncoder
model = IPSCrossEncoder.from_pretrained(
"philipphager/baidu-ultr_uva-bert_ips-pointwise",
)
# Mock batch following Baidu-ULTR with 4 documents, each with 8 tokens
batch = {
# Query_id for each document
"query_id": jnp.array([1, 1, 1, 1]),
# Document position in SERP
"positions": jnp.array([1, 2, 3, 4]),
# Token ids for: [CLS] Query [SEP] Document
"tokens": jnp.array([
[2, 21448, 21874, 21436, 1, 20206, 4012, 2860],
[2, 21448, 21874, 21436, 1, 16794, 4522, 2082],
[2, 21448, 21874, 21436, 1, 20206, 10082, 9773],
[2, 21448, 21874, 21436, 1, 2618, 8520, 2860],
]),
# Specify if a token id belongs to the query (0) or document (1)
"token_types": jnp.array([
[0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1],
]),
# Marks if a token should be attended to (True) or ignored, e.g., padding tokens (False):
"attention_mask": jnp.array([
[True, True, True, True, True, True, True, True],
[True, True, True, True, True, True, True, True],
[True, True, True, True, True, True, True, True],
[True, True, True, True, True, True, True, True],
]),
}
outputs = model(batch, train=False)
print(outputs)
```
## Reference
```
@inproceedings{Hager2024BaiduULTR,
author = {Philipp Hager and Romain Deffayet and Jean-Michel Renders and Onno Zoeter and Maarten de Rijke},
title = {Unbiased Learning to Rank Meets Reality: Lessons from Baidu’s Large-Scale Search Dataset},
booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR`24)},
organization = {ACM},
year = {2024},
}
```
| {"license": "mit", "datasets": ["philipphager/baidu-ultr-pretrain", "philipphager/baidu-ultr_uva-mlm-ctr"], "metrics": ["log-likelihood", "dcg@1", "dcg@3", "dcg@5", "dcg@10", "ndcg@10", "mrr@10"], "co2_eq_emissions": {"emissions": 2090, "source": "Calculated using the [ML CO2 impact calculator](https://mlco2.github.io/impact/#compute), training for 4 x 45 hours with a carbon efficiency of 0.029 kg/kWh. You can inspect the carbon efficiency of the French national grid provider here: https://www.rte-france.com/eco2mix/les-emissions-de-co2-par-kwh-produit-en-france", "training_type": "Pre-training", "geographical_location": "Grenoble, France", "hardware_used": "4 NVIDIA H100-80GB GPUs"}} | philipphager/baidu-ultr_uva-bert_ips-pointwise | null | [
"transformers",
"safetensors",
"bert",
"dataset:philipphager/baidu-ultr-pretrain",
"dataset:philipphager/baidu-ultr_uva-mlm-ctr",
"arxiv:2207.03051",
"arxiv:1809.03207",
"arxiv:1909.03601",
"arxiv:2404.02543",
"license:mit",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:51:04+00:00 | [
"2207.03051",
"1809.03207",
"1909.03601",
"2404.02543"
] | [] | TAGS
#transformers #safetensors #bert #dataset-philipphager/baidu-ultr-pretrain #dataset-philipphager/baidu-ultr_uva-mlm-ctr #arxiv-2207.03051 #arxiv-1809.03207 #arxiv-1909.03601 #arxiv-2404.02543 #license-mit #co2_eq_emissions #endpoints_compatible #region-us
| Pointwise MonoBERT trained on Baidu-ULTR with Inverse Propensity Scoring (IPS)
==============================================================================
A flax-based MonoBERT cross encoder trained on the Baidu-ULTR dataset with the pointwise sigmoid cross-entropy loss with IPS correction suggested by Bekker et al. and Saito et al. The loss uses inverse propensity scoring to mitigate position bias in click data by giving higher weight to clicks on items that are less likely to be observed by users. For more info, read our paper and find the code for this model here.
Test Results on Baidu-ULTR
--------------------------
Ranking performance is measured in DCG, nDCG, and MRR on expert annotations (6,985 queries). Click prediction performance is measured in log-likelihood on one test partition of user clicks (≈297k queries).
Usage
-----
Here is an example of downloading the model and calling it for inference on a mock batch of input data. For more details on how to use the model on the Baidu-ULTR dataset, take a look at our training and evaluation scripts in our code repository.
Reference
---------
| [] | [
"TAGS\n#transformers #safetensors #bert #dataset-philipphager/baidu-ultr-pretrain #dataset-philipphager/baidu-ultr_uva-mlm-ctr #arxiv-2207.03051 #arxiv-1809.03207 #arxiv-1909.03601 #arxiv-2404.02543 #license-mit #co2_eq_emissions #endpoints_compatible #region-us \n"
] |
null | transformers |
# hus960/firefly-qwen1.5-en-7b-dpo-v0.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1`](https://huggingface.co/YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo hus960/firefly-qwen1.5-en-7b-dpo-v0.1-Q4_K_M-GGUF --model firefly-qwen1.5-en-7b-dpo-v0.1.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo hus960/firefly-qwen1.5-en-7b-dpo-v0.1-Q4_K_M-GGUF --model firefly-qwen1.5-en-7b-dpo-v0.1.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m firefly-qwen1.5-en-7b-dpo-v0.1.Q4_K_M.gguf -n 128
```
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "basemodel": "Qwen/Qwen1.5-7B"} | hus960/firefly-qwen1.5-en-7b-dpo-v0.1-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:51:23+00:00 | [] | [] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us
|
# hus960/firefly-qwen1.5-en-7b-dpo-v0.1-Q4_K_M-GGUF
This model was converted to GGUF format from 'YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1' using URL via URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo.
| [
"# hus960/firefly-qwen1.5-en-7b-dpo-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us \n",
"# hus960/firefly-qwen1.5-en-7b-dpo-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | marcelomathias/mistral_7b_lora_equus | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:51:45+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | diffusers | # Nazareth
<Gallery />
## Trigger words
You should use `Atidira` to trigger the image generation.
You should use `Dira` to trigger the image generation.
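For example, with the diffusers library (a sketch; loading this repository via `load_lora_weights` and the prompt wording are assumptions, not part of this card):

```python
from diffusers import StableDiffusionPipeline
import torch

# Load the Stable Diffusion 1.5 base model this LoRA was trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repository (assumed to load directly)
pipe.load_lora_weights("Antiquarian/Nazareth")

# Use one of the trigger words in the prompt
image = pipe("a photo of Atidira").images[0]
image.save("atidira.png")
```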
## Download model
Weights for this model are available in Safetensors format.
[Download](/Antiquarian/Nazareth/tree/main) them in the Files & versions tab.
| {"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "Hyper realistic, A RAW Photo of a ((nude)) girl, shirt lift,open clothes, open shirt, cleavage, nipple slip,huge breasts, boob, breast slip, underboob, sideboob, (((skin detail))), HD, perfect, perky boobs, high quality, detailed pussy, innie, perfect pussy, bright pussy, shaved pussy, no pubes, (small nipples), (small areola),young female, dark nipple, big areola, hard nipple, black nipple, very dark nipple, ultra HD, detailed nipple, photorealistic, topless, cleavage, shirt lift, big breast, large breasts, nude, naked, braless ,head scarf,clothes removed, <lora:AtidiraLoRA:1>", "parameters": {"negative_prompt": "(((smooth skin))), extra nipples, deformed body, (((deformed breast))), (((mutated breast))), deformed pussy, deformed nipples, low quality, medium quality, extra fingers, missing fingers, mutated fingers, missing nipples, missing breasts, extra breasts, missing arms, cgi, airbrush, cartoon, unequal boob size, oversized vagina, piercings, unnatural nipples, pussy hair, (((pubes))), smooth skin, dark nipples, gaussian, blur, blurry, (((hair))), (((hairs))), monochrome, "}, "output": {"url": "images/00036-2598909348.png"}}], "base_model": "runwayml/stable-diffusion-v1-5", "instance_prompt": "Atidira, Dira"} | Antiquarian/Nazareth | null | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"region:us"
] | null | 2024-04-24T13:52:15+00:00 | [] | [] | TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-runwayml/stable-diffusion-v1-5 #region-us
| # Nazareth
<Gallery />
## Trigger words
You should use 'Atidira' to trigger the image generation.
You should use 'Dira' to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
| [
"# Nazareth\n\n<Gallery />",
"## Trigger words\n\nYou should use 'Atidira' to trigger the image generation.\n\nYou should use 'Dira' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] | [
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-runwayml/stable-diffusion-v1-5 #region-us \n",
"# Nazareth\n\n<Gallery />",
"## Trigger words\n\nYou should use 'Atidira' to trigger the image generation.\n\nYou should use 'Dira' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_bert_10K
This model is a fine-tuned version of [google-bert/bert-large-cased](https://huggingface.co/google-bert/bert-large-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.2
- train_batch_size: 8
- eval_batch_size: 8
- seed: 8446
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
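Expressed as transformers' `TrainingArguments`, the configuration above would look roughly as follows (a reconstruction of the list, not the original training script; note the card reports plain Adam while `Trainer` defaults to AdamW):

```python
from transformers import TrainingArguments

# Reconstruction of the reported hyperparameters (output_dir is a placeholder)
args = TrainingArguments(
    output_dir="results_bert_10K",
    learning_rate=0.2,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=8446,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
)
```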
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google-bert/bert-large-cased", "model-index": [{"name": "results_bert_10K", "results": []}]} | Elkelouizajo/bert_mnli_10K | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:52:50+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-large-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# results_bert_10K
This model is a fine-tuned version of google-bert/bert-large-cased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.2
- train_batch_size: 8
- eval_batch_size: 8
- seed: 8446
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# results_bert_10K\n\nThis model is a fine-tuned version of google-bert/bert-large-cased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.2\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 8446\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-large-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# results_bert_10K\n\nThis model is a fine-tuned version of google-bert/bert-large-cased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.2\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 8446\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | transformers |
# Listwise MonoBERT trained on Baidu-ULTR with Inverse Propensity Scoring (IPS)
A flax-based MonoBERT cross encoder trained on the [Baidu-ULTR](https://arxiv.org/abs/2207.03051) dataset with a **listwise softmax cross-entropy loss with IPS correction** adapted from the work of [Ai et al.](https://arxiv.org/abs/1804.05938). The loss uses inverse propensity scoring to mitigate position bias in click data by giving higher weight to clicks on items that are less likely to be observed by users. For more info, [read our paper](https://arxiv.org/abs/2404.02543) and [find the code for this model here](https://github.com/philipphager/baidu-bert-model).
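For intuition, a minimal sketch of such an IPS-weighted softmax cross-entropy (an illustration only, not the exact implementation from the linked repository; per-rank examination propensities are assumed to be estimated beforehand):
```python
import jax
import jax.numpy as jnp

def ips_listwise_loss(scores, clicks, positions, propensities):
    """Sketch of a softmax cross-entropy loss with IPS correction.

    All arrays cover the candidate documents of a single query.
    """
    theta = propensities[positions - 1]       # examination probability per rank
    log_softmax = jax.nn.log_softmax(scores)  # listwise distribution over documents
    # Each click contributes with weight 1/theta to debias position effects
    return -jnp.sum((clicks / theta) * log_softmax)
```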
## Test Results on Baidu-ULTR
Ranking performance is measured in DCG, nDCG, and MRR on expert annotations (6,985 queries). Click prediction performance is measured in log-likelihood on one test partition of user clicks (≈297k queries).
| Model | Log-likelihood | DCG@1 | DCG@3 | DCG@5 | DCG@10 | nDCG@10 | MRR@10 |
|------------------------------------------------------------------------------------------------|----------------|-------|-------|-------|--------|---------|--------|
| [Pointwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-pointwise) | 0.227 | 1.641 | 3.462 | 4.752 | 7.251 | 0.357 | 0.609 |
| [Pointwise Two-Tower](https://huggingface.co/philipphager/baidu-ultr_uva-bert_twotower) | 0.218 | 1.629 | 3.471 | 4.822 | 7.456 | 0.367 | 0.607 |
| [Pointwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-pointwise) | 0.222 | 1.295 | 2.811 | 3.977 | 6.296 | 0.307 | 0.534 |
| [Listwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-listwise) | - | 1.947 | 4.108 | 5.614 | 8.478 | 0.405 | 0.639 |
| [Listwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-listwise) | - | 1.671 | 3.530 | 4.873 | 7.450 | 0.361 | 0.603 |
| [Listwise DLA](https://huggingface.co/philipphager/baidu-ultr_uva-bert_dla) | - | 1.796 | 3.730 | 5.125 | 7.802 | 0.377 | 0.615 |
## Usage
Here is an example of downloading the model and calling it for inference on a mock batch of input data. For more details on how to use the model on the Baidu-ULTR dataset, take a look at our [training](https://github.com/philipphager/baidu-bert-model/blob/main/main.py) and [evaluation scripts](https://github.com/philipphager/baidu-bert-model/blob/main/eval.py) in our code repository.
```Python
import jax.numpy as jnp
from src.model import ListwiseIPSCrossEncoder
model = ListwiseIPSCrossEncoder.from_pretrained(
"philipphager/baidu-ultr_uva-bert_ips-listwise",
)
# Mock batch following Baidu-ULTR with 4 documents, each with 8 tokens
batch = {
# Query_id for each document
"query_id": jnp.array([1, 1, 1, 1]),
# Document position in SERP
"positions": jnp.array([1, 2, 3, 4]),
# Token ids for: [CLS] Query [SEP] Document
"tokens": jnp.array([
[2, 21448, 21874, 21436, 1, 20206, 4012, 2860],
[2, 21448, 21874, 21436, 1, 16794, 4522, 2082],
[2, 21448, 21874, 21436, 1, 20206, 10082, 9773],
[2, 21448, 21874, 21436, 1, 2618, 8520, 2860],
]),
# Specify if a token id belongs to the query (0) or document (1)
"token_types": jnp.array([
[0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1],
]),
# Marks if a token should be attended to (True) or ignored, e.g., padding tokens (False):
"attention_mask": jnp.array([
[True, True, True, True, True, True, True, True],
[True, True, True, True, True, True, True, True],
[True, True, True, True, True, True, True, True],
[True, True, True, True, True, True, True, True],
]),
}
outputs = model(batch, train=False)
print(outputs)
```
## Reference
```
@inproceedings{Hager2024BaiduULTR,
author = {Philipp Hager and Romain Deffayet and Jean-Michel Renders and Onno Zoeter and Maarten de Rijke},
title = {Unbiased Learning to Rank Meets Reality: Lessons from Baidu’s Large-Scale Search Dataset},
booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR`24)},
organization = {ACM},
year = {2024},
}
```
| {"license": "mit", "datasets": ["philipphager/baidu-ultr-pretrain", "philipphager/baidu-ultr_uva-mlm-ctr"], "metrics": ["log-likelihood", "dcg@1", "dcg@3", "dcg@5", "dcg@10", "ndcg@10", "mrr@10"], "co2_eq_emissions": {"emissions": 2090, "source": "Calculated using the [ML CO2 impact calculator](https://mlco2.github.io/impact/#compute), training for 4 x 45 hours with a carbon efficiency of 0.029 kg/kWh. You can inspect the carbon efficiency of the French national grid provider here: https://www.rte-france.com/eco2mix/les-emissions-de-co2-par-kwh-produit-en-france", "training_type": "Pre-training", "geographical_location": "Grenoble, France", "hardware_used": "4 NVIDIA H100-80GB GPUs"}} | philipphager/baidu-ultr_uva-bert_ips-listwise | null | [
"transformers",
"safetensors",
"bert",
"dataset:philipphager/baidu-ultr-pretrain",
"dataset:philipphager/baidu-ultr_uva-mlm-ctr",
"arxiv:2207.03051",
"arxiv:1804.05938",
"arxiv:2404.02543",
"license:mit",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T13:53:15+00:00 | [
"2207.03051",
"1804.05938",
"2404.02543"
] | [] | TAGS
#transformers #safetensors #bert #dataset-philipphager/baidu-ultr-pretrain #dataset-philipphager/baidu-ultr_uva-mlm-ctr #arxiv-2207.03051 #arxiv-1804.05938 #arxiv-2404.02543 #license-mit #co2_eq_emissions #endpoints_compatible #region-us
| Listwise MonoBERT trained on Baidu-ULTR with Inverse Propensity Scoring (IPS)
=============================================================================
A flax-based MonoBERT cross encoder trained on the Baidu-ULTR dataset with a listwise softmax cross-entropy loss with IPS correction adapted from the work of Ai et al. The loss uses inverse propensity scoring to mitigate position bias in click data by giving higher weight to clicks on items that are less likely to be observed by users. For more info, read our paper and find the code for this model here.
Test Results on Baidu-ULTR
--------------------------
Ranking performance is measured in DCG, nDCG, and MRR on expert annotations (6,985 queries). Click prediction performance is measured in log-likelihood on one test partition of user clicks (≈297k queries).
Usage
-----
Here is an example of downloading the model and calling it for inference on a mock batch of input data. For more details on how to use the model on the Baidu-ULTR dataset, take a look at our training and evaluation scripts in our code repository.
Reference
---------
| [] | [
"TAGS\n#transformers #safetensors #bert #dataset-philipphager/baidu-ultr-pretrain #dataset-philipphager/baidu-ultr_uva-mlm-ctr #arxiv-2207.03051 #arxiv-1804.05938 #arxiv-2404.02543 #license-mit #co2_eq_emissions #endpoints_compatible #region-us \n"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
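The same settings expressed as a `BitsAndBytesConfig` (a sketch reconstructed from the list above, not the original training code):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```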
### Framework versions
- PEFT 0.7.0.dev0
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "meta-llama/Llama-2-13b-chat-hf"} | bmehrba/Llama-2-13b-chat-hf-fine-tuned-adapters_Epistemic_Llama13b_0.0_Seed105 | null | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"region:us"
] | null | 2024-04-24T13:53:23+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-meta-llama/Llama-2-13b-chat-hf #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-meta-llama/Llama-2-13b-chat-hf #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
null | null | RegalHyperus' mirror of the fixed KLMv7s pretrain by SeoulStreamingStation. Go to https://huggingface.co/SeoulStreamingStation/KLMv7s for the OG | {} | RegalHyperus/KLMv7sMirror | null | [
"region:us"
] | null | 2024-04-24T13:53:33+00:00 | [] | [] | TAGS
#region-us
| RegalHyperus' mirror of the fixed KLMv7s pretrain by SeoulStreamingStation. Go to URL for the OG | [] | [
"TAGS\n#region-us \n"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "meta-llama/Llama-2-13b-chat-hf"} | bmehrba/Llama-2-13b-chat-hf-fine-tuned_Epistemic_Llama13b_0.0_Seed105 | null | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"region:us"
] | null | 2024-04-24T13:53:42+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-meta-llama/Llama-2-13b-chat-hf #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-meta-llama/Llama-2-13b-chat-hf #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
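The snippet itself is missing from the card; a minimal sketch based only on the repository id and its tags (`mixtral`, `text-generation`, 4-bit), so treat it as illustrative:

```python
# Minimal text-generation sketch; illustrative, not provided by the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JFernandoGRE/mixtral_8x7b_augmenteddemocracy_dups_all4_25"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```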
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | JFernandoGRE/mixtral_8x7b_augmenteddemocracy_dups_all4_25 | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-24T13:54:06+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |