| Field | Type | Range |
| --- | --- | --- |
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1-900k |
| metadata | stringlengths | 2-438k |
| id | stringlengths | 5-122 |
| last_modified | null | - |
| tags | sequencelengths | 1-1.84k |
| sha | null | - |
| created_at | stringlengths | 25-25 |
| arxiv | sequencelengths | 0-201 |
| languages | sequencelengths | 0-1.83k |
| tags_str | stringlengths | 17-9.34k |
| text_str | stringlengths | 0-389k |
| text_lists | sequencelengths | 0-722 |
| processed_texts | sequencelengths | 1-723 |
text-generation
transformers
# Lovelace Medium Alpha1 550M parameter Transformer-XL style model trained on 100B tokens of The Pile! This model was originally trained for the "Direct Preference Heads" paper, but will also be used as the basis for much of my future research. All code used to train and run these models is available here: https://github.com/Avelina9X/memory-transformer-pt4 ## Model Architecture | Name | Value | | --- | --- | | Total Parameters | 551M | | Non-Embedding Parameters | 512M | | Vocab Size | 50272 | | \\(d_\text{vocab}\\) | 768 | | \\(d_\text{model}\\) | 1536 | | \\(n_\text{layers}\\) | 18 | | FFN Activation | SwiGLU | | \\(d_\text{ffn}\\) | 4096 | | Attention Type | Full | | Position Embedding | Reversed RoPE with ABF | | \\(n_\text{heads}\\) | 24 | | \\(d_\text{key}\\) | 64 | | Trained Context | 2048 | | Trained Memory | 2048 | | Max Inference Context | 4096 | ## Model Collection | Model | Link | | --- | --- | | Pre-Trained Model | [lovelace-medium-alpha1](https://huggingface.co/Avelina/lovelace-medium-alpha1) | | Fine-Tuned Model | lovelace-medium-alpha1-instruct | | DPH Aligned Model | lovelace-medium-alpha1-instruct-hf | | DPH Aligned Model (Multiple Heads) | lovelace-medium-alpha1-instruct-hf-multihead |
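The card stops short of a usage snippet. A minimal loading sketch follows, assuming the custom `lsw_transformer` architecture is exposed through `transformers` remote code; the repository id comes from the collection table above, while `trust_remote_code=True` and the prompt are assumptions, not confirmed by the card:

```python
# Hedged sketch: load the pre-trained checkpoint with transformers.
# Assumes the custom lsw_transformer class ships as remote code.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Avelina/lovelace-medium-alpha1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("The Pile is", return_tensors="pt")  # placeholder prompt
# Keep prompt + generation within the 4096-token max inference context.
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```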
{"language": ["en"], "license": "bsd-3-clause", "library_name": "transformers", "datasets": ["EleutherAI/pile"]}
Avelina/lovelace-medium-alpha1
null
[ "transformers", "safetensors", "lsw_transformer", "text-generation", "en", "dataset:EleutherAI/pile", "license:bsd-3-clause", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T12:43:00+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #lsw_transformer #text-generation #en #dataset-EleutherAI/pile #license-bsd-3-clause #autotrain_compatible #endpoints_compatible #region-us
Lovelace Medium Alpha1 ====================== 550M parameter Transformer-XL style model trained on 100B tokens of The Pile! This model was originally trained for the "Direct Preference Heads" paper, but will also be used as the basis for much of my future research. All code used to train and run these models is available here: URL Model Architecture ------------------ Model Collection ----------------
[]
[ "TAGS\n#transformers #safetensors #lsw_transformer #text-generation #en #dataset-EleutherAI/pile #license-bsd-3-clause #autotrain_compatible #endpoints_compatible #region-us \n" ]
summarization
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-multinews This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4152 - Rouge1: 14.6798 - Rouge2: 5.2044 - Rougel: 11.2346 - Rougelsum: 12.9794 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.8162 | 1.0 | 506 | 2.4807 | 14.5888 | 4.9839 | 11.0896 | 12.9 | 20.0 | | 2.6122 | 2.0 | 1012 | 2.4371 | 14.9075 | 5.3211 | 11.2711 | 13.1998 | 20.0 | | 2.518 | 3.0 | 1518 | 2.4141 | 14.8607 | 5.2903 | 11.332 | 13.1363 | 20.0 | | 2.4585 | 4.0 | 2024 | 2.4246 | 14.7346 | 5.2263 | 11.2281 | 13.0277 | 20.0 | | 2.4206 | 5.0 | 2530 | 2.4152 | 14.6798 | 5.2044 | 11.2346 | 12.9794 | 20.0 | ### Framework versions - Transformers 4.40.1 - Pytorch 1.13.1+cu117 - Datasets 2.19.0 - Tokenizers 0.19.1
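The card reports metrics but no inference example. A minimal sketch using the `transformers` summarization pipeline follows; the model id matches this repository, the input text is a placeholder, and `max_length=20` mirrors the reported Gen Len:

```python
# Minimal sketch: summarize with the fine-tuned BART checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="Vexemous/bart-base-finetuned-multinews")
article = "Long news article text goes here..."  # placeholder input
print(summarizer(article, max_length=20)[0]["summary_text"])
```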
{"license": "apache-2.0", "metrics": ["rouge"], "base_model": "facebook/bart-base", "pipeline_tag": "summarization", "model-index": [{"name": "bart-base-finetuned-multinews", "results": []}]}
Vexemous/bart-base-finetuned-multinews
null
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "summarization", "base_model:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T12:43:15+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bart #text2text-generation #summarization #base_model-facebook/bart-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bart-base-finetuned-multinews ============================= This model is a fine-tuned version of facebook/bart-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.4152 * Rouge1: 14.6798 * Rouge2: 5.2044 * Rougel: 11.2346 * Rougelsum: 12.9794 * Gen Len: 20.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 1.13.1+cu117 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 1.13.1+cu117\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bart #text2text-generation #summarization #base_model-facebook/bart-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 1.13.1+cu117\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BV_symbols_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9177 - Accuracy: 0.8697 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.6848 | 0.9892 | 69 | 1.4784 | 0.7017 | | 1.1065 | 1.9928 | 139 | 1.0834 | 0.8167 | | 0.9403 | 2.9677 | 207 | 0.9177 | 0.8697 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
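For completeness, a minimal inference sketch with the `transformers` image-classification pipeline; the model id matches this repository and the image path is a hypothetical placeholder:

```python
# Minimal sketch: classify an image with the fine-tuned ViT checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="diegozambrana/BV_symbols_model")
for pred in classifier("symbol.png"):  # hypothetical local image file
    print(pred["label"], round(pred["score"], 3))
```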
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "BV_symbols_model", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train[:30%]", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8697214734950584, "name": "Accuracy"}]}]}]}
diegozambrana/BV_symbols_model
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T12:44:40+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-imagefolder #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
BV\_symbols\_model ================== This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 0.9177 * Accuracy: 0.8697 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-imagefolder #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
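The "How to Get Started" section above is empty. A hedged sketch, inferred only from the repo metadata (`library_name: peft`, `base_model: openlm-research/open_llama_3b_v2`); note the `-clf-` suffix in the repo name hints at a classification head, so the causal-LM auto class below is an assumption:

```python
# Hedged sketch: attach the LoRA adapter to its declared base model.
# The auto class is assumed; the card itself gives no usage details.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "openlm-research/open_llama_3b_v2"
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "yiyic/llama3b-text-ent-lora-clf-epoch-3")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```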
{"library_name": "peft", "base_model": "openlm-research/open_llama_3b_v2"}
yiyic/llama3b-text-ent-lora-clf-epoch-3
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openlm-research/open_llama_3b_v2", "region:us" ]
null
2024-04-26T12:44:46+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-openlm-research/open_llama_3b_v2 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.7.2.dev0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.7.2.dev0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-openlm-research/open_llama_3b_v2 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.7.2.dev0" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # V0424HMA19 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0672 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 60 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.4874 | 0.09 | 10 | 0.1448 | | 0.1419 | 0.18 | 20 | 0.1072 | | 0.1008 | 0.27 | 30 | 0.0774 | | 0.0902 | 0.36 | 40 | 0.0720 | | 0.0783 | 0.45 | 50 | 0.0760 | | 0.0854 | 0.54 | 60 | 0.0870 | | 0.09 | 0.63 | 70 | 0.0816 | | 0.0853 | 0.73 | 80 | 0.0755 | | 0.0815 | 0.82 | 90 | 0.0723 | | 0.083 | 0.91 | 100 | 0.0683 | | 0.0817 | 1.0 | 110 | 0.0645 | | 0.0536 | 1.09 | 120 | 0.0760 | | 0.0673 | 1.18 | 130 | 0.0727 | | 0.0618 | 1.27 | 140 | 0.0666 | | 0.06 | 1.36 | 150 | 0.0729 | | 0.07 | 1.45 | 160 | 0.0656 | | 0.0597 | 1.54 | 170 | 0.0744 | | 0.0663 | 1.63 | 180 | 0.0637 | | 0.0578 | 1.72 | 190 | 0.0623 | | 0.0653 | 1.81 | 200 | 0.0632 | | 0.0595 | 1.9 | 210 | 0.0694 | | 0.0528 | 1.99 | 220 | 0.0606 | | 0.0396 | 2.08 | 230 | 0.0618 | | 0.0348 | 2.18 | 240 | 0.0713 | | 0.0349 | 2.27 | 250 | 0.0672 | | 0.0335 | 2.36 | 260 | 0.0655 | | 0.0352 | 2.45 | 270 | 0.0655 | | 0.0318 | 2.54 | 280 | 0.0679 | | 0.0301 | 2.63 | 290 | 0.0691 | | 0.0313 | 2.72 | 300 | 0.0681 | | 0.0332 | 2.81 | 310 | 0.0674 | | 0.0326 | 2.9 | 320 | 0.0673 | | 0.0343 | 2.99 | 330 | 0.0672 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
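The hyperparameter list above maps directly onto `transformers.TrainingArguments`; a hedged reconstruction follows. It only mirrors the card's values and is not the author's actual training script; `output_dir` and the `fp16` flag are assumptions:

```python
# Hedged reconstruction of the listed hyperparameters; not the original script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="V0424HMA19",          # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,   # 8 x 16 = 128 total train batch size
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=60,
    num_train_epochs=3,
    fp16=True,                        # "Native AMP" mixed precision, assumed fp16
)
```

Adam with betas=(0.9,0.999) and epsilon=1e-08 matches the `TrainingArguments` defaults, so no optimizer arguments are needed.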
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0424HMA19", "results": []}]}
Litzy619/V0424HMA19
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-04-26T12:44:48+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
V0424HMA19 ========== This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.0672 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine\_with\_restarts * lr\_scheduler\_warmup\_steps: 60 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
null
null
# UVMap-ID: A Controllable and Personalized UV Map Generative Model [Paper](https://arxiv.org/abs/2404.14568)
{"license": "apache-2.0"}
Jichaozhang/UVMap-ID
null
[ "arxiv:2404.14568", "license:apache-2.0", "region:us" ]
null
2024-04-26T12:45:30+00:00
[ "2404.14568" ]
[]
TAGS #arxiv-2404.14568 #license-apache-2.0 #region-us
# UVMap-ID: A Controllable and Personalized UV Map Generative Model Paper
[ "# UVMap-ID: A Controllable and Personalized UV Map Generative Model\nPaper" ]
[ "TAGS\n#arxiv-2404.14568 #license-apache-2.0 #region-us \n", "# UVMap-ID: A Controllable and Personalized UV Map Generative Model\nPaper" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # virus_pythia_14_1024_cross_entropy This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 80 - eval_batch_size: 80 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
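The card does not describe the training text, so a meaningful prompt cannot be given; a minimal generation sketch with a placeholder prompt (the model id matches this repository):

```python
# Minimal sketch: text generation with the fine-tuned Pythia-14m checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="Hack90/virus_pythia_14_1024_cross_entropy")
print(generator("Placeholder prompt", max_new_tokens=32)[0]["generated_text"])
```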
{"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "virus_pythia_14_1024_cross_entropy", "results": []}]}
Hack90/virus_pythia_14_1024_cross_entropy
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "base_model:EleutherAI/pythia-14m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T12:45:39+00:00
[]
[]
TAGS #transformers #safetensors #gpt_neox #text-generation #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# virus_pythia_14_1024_cross_entropy This model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 80 - eval_batch_size: 80 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# virus_pythia_14_1024_cross_entropy\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 80\n- eval_batch_size: 80\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #gpt_neox #text-generation #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# virus_pythia_14_1024_cross_entropy\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 80\n- eval_batch_size: 80\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-2 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
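A minimal inference sketch with the `transformers` text-classification pipeline; the IMDB sentiment task is inferred from the model name only, and the card does not document the label set:

```python
# Minimal sketch: classification with the fine-tuned Pythia-1b checkpoint.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-2",
)
print(clf("A thoroughly enjoyable film."))  # placeholder review text
```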
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-2", "results": []}]}
AlignmentResearch/robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-2
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T12:46:27+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-2 This model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-2\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-2\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
![image/png](https://huggingface.co/Dunjeon/lostmagic-RP-GGUF/resolve/main/images/00116-439147115.png) LostMagic-RP_8B Version 0.42624 Uncensored, Creative, Immersive, Role Play AI Settings: ``` Prompt Format Chat or Chat Instruct (Silly Tavern Default): System Message Here User: input Bot: ``` Parameters: ```json { "max_context_length": 8192, "max_length": 120, "rep_pen": 1.03, "rep_pen_slope": 0.70, "rep_pen_range": 320, "temperature": 1.25, "tfs": 1.0, "top_a": 0, "top_k": 0, "top_p": 1.0, "min_p": 0.1, "typical": 1.0, "presence_penalty": 0, "mirostat": 0, "mirostat_tau": 5, "mirostat_eta": 0.1 } ```
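The parameter names above follow the KoboldCpp/SillyTavern convention. Since this repository ships GGUF weights, a hedged sketch of roughly equivalent settings in llama-cpp-python follows; the .gguf filename is a hypothetical placeholder, and `rep_pen_slope`, `rep_pen_range`, `tfs`, and `top_a` have no direct keyword here, so they are omitted:

```python
# Hedged mapping of the recommended sampler settings onto llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="lostmagic-RP_8B.gguf", n_ctx=8192)  # max_context_length
out = llm(
    "System Message Here\nUser: Hello!\nBot:",  # chat prompt format from above
    max_tokens=120,        # max_length
    repeat_penalty=1.03,   # rep_pen
    temperature=1.25,
    top_k=0,               # disabled
    top_p=1.0,             # disabled
    min_p=0.1,
    typical_p=1.0,         # typical
    presence_penalty=0.0,
    mirostat_mode=0,       # mirostat off
)
print(out["choices"][0]["text"])
```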
{"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["roleplay", "uncensored", "lewd", "mature", "not-for-all-audiences", "Llama 3", "8b"], "pipeline_tag": "text-generation"}
Dunjeon/lostmagic-RP-GGUF
null
[ "transformers", "roleplay", "uncensored", "lewd", "mature", "not-for-all-audiences", "Llama 3", "8b", "text-generation", "en", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T12:46:47+00:00
[]
[ "en" ]
TAGS #transformers #roleplay #uncensored #lewd #mature #not-for-all-audiences #Llama 3 #8b #text-generation #en #license-cc-by-nc-4.0 #endpoints_compatible #region-us
!image/png LostMagic-RP_8B Version 0.42624 Uncensored, Creative, Immersive, Role Play AI Settings: Parameters:
[]
[ "TAGS\n#transformers #roleplay #uncensored #lewd #mature #not-for-all-audiences #Llama 3 #8b #text-generation #en #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-410m_mz-131f_PasswordMatch This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-410m", "model-index": [{"name": "robust_llm_pythia-410m_mz-131f_PasswordMatch", "results": []}]}
AlignmentResearch/robust_llm_pythia-410m_mz-131f_PasswordMatch
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-410m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T12:49:02+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-410m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# robust_llm_pythia-410m_mz-131f_PasswordMatch This model is a fine-tuned version of EleutherAI/pythia-410m on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# robust_llm_pythia-410m_mz-131f_PasswordMatch\n\nThis model is a fine-tuned version of EleutherAI/pythia-410m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-410m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# robust_llm_pythia-410m_mz-131f_PasswordMatch\n\nThis model is a fine-tuned version of EleutherAI/pythia-410m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-70m_mz-131f_IMDB This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "robust_llm_pythia-70m_mz-131f_IMDB", "results": []}]}
AlignmentResearch/robust_llm_pythia-70m_mz-131f_IMDB
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-70m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T12:49:36+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-70m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# robust_llm_pythia-70m_mz-131f_IMDB This model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# robust_llm_pythia-70m_mz-131f_IMDB\n\nThis model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-70m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# robust_llm_pythia-70m_mz-131f_IMDB\n\nThis model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # V0424HMA20 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0675 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 60 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8142 | 0.09 | 10 | 0.3516 | | 0.1881 | 0.18 | 20 | 0.1201 | | 0.1155 | 0.27 | 30 | 0.0873 | | 0.0936 | 0.36 | 40 | 0.0807 | | 0.0868 | 0.45 | 50 | 0.0851 | | 0.0884 | 0.54 | 60 | 0.0797 | | 0.0825 | 0.63 | 70 | 0.0671 | | 0.0726 | 0.73 | 80 | 0.0749 | | 0.0803 | 0.82 | 90 | 0.0740 | | 0.0796 | 0.91 | 100 | 0.0675 | | 0.0722 | 1.0 | 110 | 0.0688 | | 0.0639 | 1.09 | 120 | 0.0634 | | 0.0642 | 1.18 | 130 | 0.0750 | | 0.0638 | 1.27 | 140 | 0.0678 | | 0.0628 | 1.36 | 150 | 0.0673 | | 0.0645 | 1.45 | 160 | 0.0682 | | 0.0575 | 1.54 | 170 | 0.0695 | | 0.0635 | 1.63 | 180 | 0.0652 | | 0.0534 | 1.72 | 190 | 0.0661 | | 0.0682 | 1.81 | 200 | 0.0620 | | 0.0551 | 1.9 | 210 | 0.0655 | | 0.0539 | 1.99 | 220 | 0.0631 | | 0.0342 | 2.08 | 230 | 0.0705 | | 0.0331 | 2.18 | 240 | 0.0829 | | 0.0313 | 2.27 | 250 | 0.0669 | | 0.0286 | 2.36 | 260 | 0.0698 | | 0.0324 | 2.45 | 270 | 0.0721 | | 0.0288 | 2.54 | 280 | 0.0713 | | 0.0294 | 2.63 | 290 | 0.0700 | | 0.0322 | 2.72 | 300 | 0.0682 | | 0.0313 | 2.81 | 310 | 0.0675 | | 0.029 | 2.9 | 320 | 0.0676 | | 0.0359 | 2.99 | 330 | 0.0675 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0424HMA20", "results": []}]}
Litzy619/V0424HMA20
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-04-26T12:50:12+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
V0424HMA20 ========== This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.0675 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine\_with\_restarts * lr\_scheduler\_warmup\_steps: 60 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
text-generation
transformers
The tnayaj-8B model is an innovative open-source language model specifically engineered for the biomedical domain. Crafted by Jayant AI Labs, this model harnesses state-of-the-art methodologies to achieve unparalleled performance across various biomedical tasks. 🏥 Specialization in medicine: tnayaj-8B caters to the intricate linguistic and informational demands of the medical and life sciences realms. Its refinement stems from extensive training on a comprehensive biomedical dataset, enabling precise and articulate text generation within the domain. 🎓 Exceptional Performance: Boasting a staggering 8 billion parameters. 🧠 Advanced Training Methodologies: tnayaj-8B builds upon the foundational prowess of Meta-Llama-3-8B-Instruct. It integrates the DPO dataset and a tailored array of medical instruction data for refinement. Central to its training regimen are meticulously curated components, including:
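The card gives no usage snippet. A hedged sketch follows, assuming the model keeps the Meta-Llama-3-8B-Instruct chat template it was built from; the question is a placeholder, and medical output should not be relied on without expert review:

```python
# Hedged sketch: chat-style inference, assuming a Llama-3-Instruct chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Jayant9928/tnayajv2.0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

messages = [{"role": "user", "content": "What are common uses of metformin?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```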
{"license": "apache-2.0"}
Jayant9928/tnayajv2.0
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T12:51:16+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
The tnayaj-8B model is an innovative open-source language model specifically engineered for the biomedical domain. Crafted by Jayant AI Labs, this model harnesses state-of-the-art methodologies to achieve unparalleled performance across various biomedical tasks. Specialization in medicine: tnayaj-8B caters to the intricate linguistic and informational demands of the medical and life sciences realms. Its refinement stems from extensive training on a comprehensive biomedical dataset, enabling precise and articulate text generation within the domain. Exceptional Performance: Boasting a staggering 8 billion parameters. Advanced Training Methodologies: tnayaj-8B builds upon the foundational prowess of Meta-Llama-3-8B-Instruct. It integrates the DPO dataset and a tailored array of medical instruction data for refinement. Central to its training regimen are meticulously curated components, including:
[]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0.
Check that the requirements from the original repo meta-llama/Meta-Llama-3-8B are installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/meta-llama-Meta-Llama-3-8B-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) print(tokenizer.decode(outputs[0])) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
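A note on the smash step itself: the card's FAQ says the model is compressed with AWQ but never shows that step. The sketch below performs a comparable 4-bit quantization with the open-source AutoAWQ library; the quantization settings and output path are illustrative assumptions, not the values in PrunaAI's `smash_config.json`.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Illustrative 4-bit AWQ settings; PrunaAI's actual smash_config.json may differ.
model_path = "meta-llama/Meta-Llama-3-8B"
quant_path = "Meta-Llama-3-8B-AWQ-4bit"  # hypothetical output directory
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the fp16 base model and its tokenizer.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run activation-aware calibration and quantize the weights to 4 bits.
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantized checkpoint (safetensors) alongside the tokenizer.
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```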
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "meta-llama/Meta-Llama-3-8B"}
PrunaAI/meta-llama-Meta-Llama-3-8B-AWQ-4bit-smashed
null
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "base_model:meta-llama/Meta-Llama-3-8B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T12:54:46+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #pruna-ai #base_model-meta-llama/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentation to know more here - Join the Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with awq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend directly running them in the use-case conditions to know if the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to cuda overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo meta-llama/Meta-Llama-3-8B are installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed the quantization-related packages. 2. Load & run the model. ## Configurations The configuration info is in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B, which provided the base model, before using this model. The license of the 'pruna-engine' is here on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with awq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #pruna-ai #base_model-meta-llama/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with awq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # virus_pythia_14_1024_headless This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 40 - eval_batch_size: 40 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
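The hyperparameter list in this card maps one-to-one onto a `transformers.TrainingArguments` object. A hypothetical reconstruction is sketched below; `output_dir` and anything not named in the card are assumptions.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the card's hyperparameters;
# output_dir and any argument not listed in the card are assumptions.
args = TrainingArguments(
    output_dir="virus_pythia_14_1024_headless",
    learning_rate=5e-5,
    per_device_train_batch_size=40,
    per_device_eval_batch_size=40,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=1,
)
```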
{"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "virus_pythia_14_1024_headless", "results": []}]}
Hack90/virus_pythia_14_1024_headless
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "base_model:EleutherAI/pythia-14m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T12:57:23+00:00
[]
[]
TAGS #transformers #safetensors #gpt_neox #text-generation #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# virus_pythia_14_1024_headless This model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 40 - eval_batch_size: 40 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# virus_pythia_14_1024_headless\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 40\n- eval_batch_size: 40\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #gpt_neox #text-generation #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# virus_pythia_14_1024_headless\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 40\n- eval_batch_size: 40\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-alzheimers This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8319 - Accuracy: 0.5953 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.0035 | 0.9778 | 22 | 0.9198 | 0.5594 | | 0.9062 | 2.0 | 45 | 0.8479 | 0.6094 | | 0.8726 | 2.9333 | 66 | 0.8319 | 0.5953 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
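Two notes on this card. First, the total train batch size of 256 is simply the per-device batch size of 64 multiplied by the 4 gradient-accumulation steps. Second, the card gives no inference snippet; assuming the checkpoint works with the standard image-classification pipeline, usage would look roughly like the sketch below, where "scan.png" is a placeholder input path.

```python
from transformers import pipeline

# Hypothetical usage sketch; "scan.png" is a placeholder image path.
classifier = pipeline(
    "image-classification",
    model="rhlc/swin-tiny-patch4-window7-224-finetuned-alzheimers",
)
print(classifier("scan.png"))
```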
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-tiny-patch4-window7-224", "model-index": [{"name": "swin-tiny-patch4-window7-224-finetuned-alzheimers", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.5953125, "name": "Accuracy"}]}]}]}
rhlc/swin-tiny-patch4-window7-224-finetuned-alzheimers
null
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T12:57:28+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
swin-tiny-patch4-window7-224-finetuned-alzheimers ================================================= This model is a fine-tuned version of microsoft/swin-tiny-patch4-window7-224 on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 0.8319 * Accuracy: 0.5953 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 256 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/maywell/miqu-evil-dpo <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.IQ4_XS.gguf) | IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF/resolve/main/miqu-evil-dpo.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
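The card defers to TheBloke's READMEs for concatenating multi-part files without showing the command. For the split Q6_K quant listed above, reassembly is a plain byte-wise concatenation; the `llama-cli` invocation below is an assumption, since the llama.cpp binary name varies across versions.

```bash
# Reassemble the split quant into one file (part order matters).
cat miqu-evil-dpo.Q6_K.gguf.part1of2 miqu-evil-dpo.Q6_K.gguf.part2of2 \
  > miqu-evil-dpo.Q6_K.gguf

# Hypothetical llama.cpp run; adjust -ngl to the layers your GPU can hold.
./llama-cli -m miqu-evil-dpo.Q6_K.gguf -p "Hello" -n 128
```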
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["not-for-all-audiences"], "base_model": "maywell/miqu-evil-dpo", "license_link": "LICENSE", "license_name": "miqu-license", "quantized_by": "mradermacher"}
mradermacher/miqu-evil-dpo-GGUF
null
[ "transformers", "gguf", "not-for-all-audiences", "en", "base_model:maywell/miqu-evil-dpo", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-26T12:57:37+00:00
[]
[ "en" ]
TAGS #transformers #gguf #not-for-all-audiences #en #base_model-maywell/miqu-evil-dpo #license-other #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #not-for-all-audiences #en #base_model-maywell/miqu-evil-dpo #license-other #endpoints_compatible #region-us \n" ]
reinforcement-learning
sample-factory
An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r magixn/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
{"library_name": "sample-factory", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "sample-factory"], "model-index": [{"name": "APPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "doom_health_gathering_supreme", "type": "doom_health_gathering_supreme"}, "metrics": [{"type": "mean_reward", "value": "10.94 +/- 5.32", "name": "mean_reward", "verified": false}]}]}]}
magixn/rl_course_vizdoom_health_gathering_supreme
null
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-26T12:57:47+00:00
[]
[]
TAGS #sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
An APPO model trained on the doom_health_gathering_supreme environment. This model was trained using Sample-Factory 2.0: URL Documentation for how to use Sample-Factory can be found at URL ## Downloading the model After installing Sample-Factory, download the model with: ## Using the model To run the model after download, use the 'enjoy' script corresponding to this environment: You can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag. See URL for more details ## Training with this model To continue training with this model, use the 'train' script corresponding to this environment: Note, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at.
[ "## Downloading the model\n\nAfter installing Sample-Factory, download the model with:", "## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details", "## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at." ]
[ "TAGS\n#sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "## Downloading the model\n\nAfter installing Sample-Factory, download the model with:", "## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details", "## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at." ]
reinforcement-learning
null
# **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
{"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-Pixelcopter-PLE-v0", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "44.80 +/- 27.15", "name": "mean_reward", "verified": false}]}]}]}
hossniper/Reinforce-Pixelcopter-PLE-v0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-26T12:57:58+00:00
[]
[]
TAGS #Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing Pixelcopter-PLE-v0 This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
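The quickstart in this card is left as [More Information Needed]. Assuming the repository holds a standard PEFT adapter for the base model named in the metadata below, loading it would look roughly like this sketch; the assumption about the repo contents is unverified.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes the repo contains a PEFT adapter for this base model; unverified.
base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "ddd20/mistral_7b_legal_version"

# Load the full-precision base model, then attach the adapter weights.
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```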
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
ddd20/mistral_7b_legal_version
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-26T12:59:20+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
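This card is likewise an empty template. Going by the repository name and the vision-encoder-decoder tag in the record below, the checkpoint is presumably a Donut-style invoice parser; the sketch below loads it under that assumption, with "invoice.png" as a placeholder image and the caveat that Donut fine-tunes often also expect a task prompt as decoder input.

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Assumes a Donut-style checkpoint; the repo contents are unverified.
repo = "nrbhole/invoices-donut-model-v2"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

# "invoice.png" is a placeholder document image.
image = Image.open("invoice.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Greedy decode; a fine-tune-specific task prompt may be required in practice.
outputs = model.generate(pixel_values, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```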
{"library_name": "transformers", "tags": []}
nrbhole/invoices-donut-model-v2
null
[ "transformers", "safetensors", "vision-encoder-decoder", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:00:08+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-14m_mz-131f_IMDB This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "robust_llm_pythia-14m_mz-131f_IMDB", "results": []}]}
AlignmentResearch/robust_llm_pythia-14m_mz-131f_IMDB
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-14m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T13:00:43+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# robust_llm_pythia-14m_mz-131f_IMDB This model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# robust_llm_pythia-14m_mz-131f_IMDB\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# robust_llm_pythia-14m_mz-131f_IMDB\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tom-brady/6-254
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:01:08+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tom-brady/6-212
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:01:21+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tom-brady/6-220
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:01:45+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tom-brady/6-211
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:01:46+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-msn-small-finetuned-alzheimers This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0160 - Accuracy: 0.9969 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.2996 | 0.9778 | 22 | 0.3897 | 0.8438 | | 0.3703 | 2.0 | 45 | 0.3595 | 0.8594 | | 0.3087 | 2.9778 | 67 | 0.3777 | 0.8625 | | 0.486 | 4.0 | 90 | 0.4530 | 0.8187 | | 0.3307 | 4.9778 | 112 | 0.4560 | 0.8234 | | 0.306 | 6.0 | 135 | 0.3471 | 0.8672 | | 0.3005 | 6.9778 | 157 | 0.3025 | 0.8859 | | 0.319 | 8.0 | 180 | 0.2451 | 0.8984 | | 0.3489 | 8.9778 | 202 | 0.1814 | 0.9281 | | 0.3251 | 10.0 | 225 | 0.2451 | 0.9156 | | 0.3034 | 10.9778 | 247 | 0.1566 | 0.9406 | | 0.2746 | 12.0 | 270 | 0.2493 | 0.8922 | | 0.2369 | 12.9778 | 292 | 0.1622 | 0.9375 | | 0.2231 | 14.0 | 315 | 0.1781 | 0.9359 | | 0.2281 | 14.9778 | 337 | 0.1268 | 0.9531 | | 0.2001 | 16.0 | 360 | 0.2431 | 0.9141 | | 0.183 | 16.9778 | 382 | 0.1017 | 0.9625 | | 0.1891 | 18.0 | 405 | 0.1802 | 0.9391 | | 0.1862 | 18.9778 | 427 | 0.0869 | 0.9766 | | 0.1935 | 20.0 | 450 | 0.1079 | 0.9688 | | 0.1797 | 20.9778 | 472 | 0.1250 | 0.9563 | | 0.1605 | 22.0 | 495 | 0.0655 | 0.9719 | | 0.1848 | 22.9778 | 517 | 0.0806 | 0.9766 | | 0.1498 | 24.0 | 540 | 0.1116 | 0.9578 | | 0.1394 | 24.9778 | 562 | 0.0807 | 0.9672 | | 0.1584 | 26.0 | 585 | 0.0525 | 0.9797 | | 0.1302 | 26.9778 | 607 | 0.0513 | 0.9828 | | 0.1356 | 28.0 | 630 | 0.0420 | 0.9875 | | 0.1101 | 28.9778 | 652 | 0.0354 | 0.9875 | | 0.1227 | 30.0 | 675 | 0.0583 | 0.9766 | | 0.1158 | 30.9778 | 697 | 0.0253 | 0.9906 | | 0.117 | 32.0 | 720 | 0.0231 | 0.9906 | | 0.1022 | 32.9778 | 742 | 0.0726 | 0.9797 | | 0.1221 | 34.0 | 765 | 0.0160 | 0.9969 | | 0.0956 | 34.9778 | 787 | 0.0482 | 0.9844 | | 0.0856 | 36.0 | 810 | 0.0256 | 0.9875 | | 0.0996 | 36.9778 | 832 | 0.0211 | 0.9906 | | 0.0848 | 38.0 | 855 | 0.0446 | 0.9797 | | 0.1001 | 38.9778 | 877 | 0.0274 | 0.9875 | | 0.0976 | 40.0 | 900 | 0.0225 | 0.9922 | | 0.0864 | 40.9778 | 922 | 0.0207 | 0.9922 | | 0.0865 | 42.0 | 945 | 0.0193 | 0.9969 | | 0.0773 | 42.9778 | 967 | 0.0203 | 0.9922 | | 0.075 | 44.0 | 990 | 0.0131 | 0.9969 | | 0.0761 | 44.9778 | 1012 | 0.0129 | 0.9938 | | 0.0624 | 46.0 | 1035 | 0.0114 | 0.9969 | | 0.0557 | 46.9778 | 1057 | 0.0102 | 0.9953 | | 0.0708 | 48.0 | 1080 | 0.0116 | 0.9953 | | 0.0667 | 48.8889 | 1100 | 0.0131 | 0.9953 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "facebook/vit-msn-small", "model-index": [{"name": "vit-msn-small-finetuned-alzheimers", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.996875, "name": "Accuracy"}]}]}]}
rhlc/vit-msn-small-finetuned-alzheimers
null
[ "transformers", "tensorboard", "safetensors", "vit_msn", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/vit-msn-small", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:04:20+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vit_msn #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/vit-msn-small #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
vit-msn-small-finetuned-alzheimers ================================== This model is a fine-tuned version of facebook/vit-msn-small on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 0.0160 * Accuracy: 0.9969 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 256 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 50 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 50", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit_msn #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/vit-msn-small #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 50", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
vaatsav06/Llama3_medqa1
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:06:33+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
automatic-speech-recognition
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
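The record below identifies this card's checkpoint as Mihaj/wav2vec2-large-uralic-voxpopuli-v2-karelian-CodeSwitching_with_pitch_and_tempo_aug, with an automatic-speech-recognition pipeline tag. As the getting-started section is empty, a minimal sketch follows, assuming the checkpoint works with the standard transformers ASR pipeline and that `sample.wav` is a local 16 kHz mono recording:

```python
from transformers import pipeline

# Hypothetical usage; the card itself ships no official snippet.
asr = pipeline(
    "automatic-speech-recognition",
    model="Mihaj/wav2vec2-large-uralic-voxpopuli-v2-karelian-CodeSwitching_with_pitch_and_tempo_aug",
)
print(asr("sample.wav")["text"])
```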
{"library_name": "transformers", "tags": []}
Mihaj/wav2vec2-large-uralic-voxpopuli-v2-karelian-CodeSwitching_with_pitch_and_tempo_aug
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:07:28+00:00
[ "1910.09700" ]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
presencesw/vistral_test
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:08:15+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
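The record below (Shure-Dev/llava-vima, tagged llava, trl, sft, and 4-bit) likewise lacks a getting-started snippet. A minimal sketch under the assumptions that the checkpoint follows the standard LLaVA layout and that the image path and prompt are placeholders:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, BitsAndBytesConfig, LlavaForConditionalGeneration

# Assumption: standard LLaVA checkpoint; 4-bit loading mirrors the repo's "4-bit" tag.
model_id = "Shure-Dev/llava-vima"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    torch_dtype=torch.float16,
)

image = Image.open("scene.png")  # placeholder image file
prompt = "USER: <image>\nDescribe the scene. ASSISTANT:"
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```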
{"library_name": "transformers", "tags": ["trl", "sft"]}
Shure-Dev/llava-vima
null
[ "transformers", "safetensors", "llava", "pretraining", "trl", "sft", "arxiv:1910.09700", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-26T13:08:41+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llava #pretraining #trl #sft #arxiv-1910.09700 #endpoints_compatible #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llava #pretraining #trl #sft #arxiv-1910.09700 #endpoints_compatible #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
sentence-similarity
sentence-transformers
# {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 2048 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1401 with parameters: ``` {'batch_size': 256, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `__main__.MultipleNegativesRankingLoss_with_logging` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: LlamaModel (1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
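The fit()-method parameters listed above map directly onto SentenceTransformer.fit. Below is a minimal sketch of the equivalent training call, assuming the stock losses.MultipleNegativesRankingLoss stands in for the card's custom `MultipleNegativesRankingLoss_with_logging` subclass, that `{MODEL_NAME}` is replaced with the actual checkpoint name, and that the InputExample pair is an illustrative placeholder for the real 1401-batch dataset:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Illustrative data; real training used a DataLoader of length 1401 at batch size 256.
train_examples = [
    InputExample(texts=["This is an example sentence", "Each sentence is converted"]),
]
train_dataloader = DataLoader(train_examples, batch_size=256)

model = SentenceTransformer("{MODEL_NAME}")  # placeholder kept from the card
# The card trains a logging subclass of this loss; the objective is the same.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    scheduler="WarmupLinear",
    warmup_steps=100,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```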
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"], "pipeline_tag": "sentence-similarity", "model-index": [{"name": "sentence_croissant_alpha_v0.3", "results": [{"task": {"type": "Clustering"}, "dataset": {"name": "MTEB AlloProfClusteringP2P", "type": "lyon-nlp/alloprof", "config": "default", "split": "test", "revision": "392ba3f5bcc8c51f578786c1fc3dae648662cb9b"}, "metrics": [{"type": "v_measure", "value": 56.72912207023513}, {"type": "v_measures", "value": [0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 
0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 
0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 
0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 
0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 
0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886, 0.5320130285438164, 0.5262623550285312, 0.5801017400160106, 0.5959165699319396, 0.5834996150492608, 0.5569839493118243, 0.6099665491090271, 0.5780727185697752, 0.4988023041518384, 0.6112933773114886]}, {"type": "v_measure", "value": 37.62128894914382}, {"type": "v_measures", "value": [0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 
0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 
0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 
0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 
0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 
0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 
0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827, 0.35401974034534334, 0.3980248155652733, 0.4010417412014714, 0.3771452994293956, 0.3279249606358475, 0.45943544754326515, 0.40148836454988795, 0.36775719216316904, 0.3211924643714899, 0.35409886910923827]}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AlloprofReranking", "type": "lyon-nlp/mteb-fr-reranking-alloprof-s2p", "config": "default", "split": "test", "revision": "e40c8a63ce02da43200eccb5b0846fcaa888f562"}, "metrics": [{"type": "map", "value": 68.30621526032894}, {"type": "mrr", "value": 69.67719384817829}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB AlloprofRetrieval", "type": "lyon-nlp/alloprof", "config": "default", "split": "test", "revision": "fcf295ea64c750f41fadbaa37b9b861558e1bfbd"}, "metrics": [{"type": "map_at_1", "value": 32.254}, {"type": "map_at_10", "value": 43.834}, {"type": "map_at_100", "value": 44.728}, {"type": "map_at_1000", "value": 44.769999999999996}, {"type": "map_at_20", "value": 44.361}, {"type": "map_at_3", "value": 40.753}, {"type": "map_at_5", "value": 42.486000000000004}, {"type": "mrr_at_1", "value": 32.254}, {"type": "mrr_at_10", "value": 43.834}, {"type": "mrr_at_100", "value": 44.728}, {"type": "mrr_at_1000", "value": 44.769999999999996}, {"type": "mrr_at_20", "value": 44.361}, {"type": "mrr_at_3", "value": 40.753}, {"type": "mrr_at_5", "value": 42.486000000000004}, {"type": "ndcg_at_1", "value": 32.254}, {"type": "ndcg_at_10", "value": 49.845}, {"type": "ndcg_at_100", "value": 54.37800000000001}, {"type": "ndcg_at_1000", "value": 55.498000000000005}, {"type": "ndcg_at_20", "value": 51.772}, {"type": 
"ndcg_at_3", "value": 43.486000000000004}, {"type": "ndcg_at_5", "value": 46.594}, {"type": "precision_at_1", "value": 32.254}, {"type": "precision_at_10", "value": 6.891}, {"type": "precision_at_100", "value": 0.905}, {"type": "precision_at_1000", "value": 0.099}, {"type": "precision_at_20", "value": 3.8280000000000003}, {"type": "precision_at_3", "value": 17.127}, {"type": "precision_at_5", "value": 11.779}, {"type": "recall_at_1", "value": 32.254}, {"type": "recall_at_10", "value": 68.91199999999999}, {"type": "recall_at_100", "value": 90.501}, {"type": "recall_at_1000", "value": 99.309}, {"type": "recall_at_20", "value": 76.554}, {"type": "recall_at_3", "value": 51.382000000000005}, {"type": "recall_at_5", "value": 58.894999999999996}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (fr)", "type": "mteb/amazon_reviews_multi", "config": "fr", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 35.106}, {"type": "f1", "value": 34.825583560299656}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB BSARDRetrieval", "type": "maastrichtlawtech/bsard", "config": "default", "split": "test", "revision": "5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59"}, "metrics": [{"type": "map_at_1", "value": 0.0}, {"type": "map_at_10", "value": 0.15}, {"type": "map_at_100", "value": 0.19499999999999998}, {"type": "map_at_1000", "value": 0.243}, {"type": "map_at_20", "value": 0.18}, {"type": "map_at_3", "value": 0.15}, {"type": "map_at_5", "value": 0.15}, {"type": "mrr_at_1", "value": 0.0}, {"type": "mrr_at_10", "value": 0.15}, {"type": "mrr_at_100", "value": 0.19499999999999998}, {"type": "mrr_at_1000", "value": 0.243}, {"type": "mrr_at_20", "value": 0.18}, {"type": "mrr_at_3", "value": 0.15}, {"type": "mrr_at_5", "value": 0.15}, {"type": "ndcg_at_1", "value": 0.0}, {"type": "ndcg_at_10", "value": 0.22499999999999998}, {"type": "ndcg_at_100", "value": 0.545}, {"type": "ndcg_at_1000", "value": 2.622}, {"type": "ndcg_at_20", "value": 0.338}, {"type": "ndcg_at_3", "value": 0.22499999999999998}, {"type": "ndcg_at_5", "value": 0.22499999999999998}, {"type": "precision_at_1", "value": 0.0}, {"type": "precision_at_10", "value": 0.045}, {"type": "precision_at_100", "value": 0.023}, {"type": "precision_at_1000", "value": 0.02}, {"type": "precision_at_20", "value": 0.045}, {"type": "precision_at_3", "value": 0.15}, {"type": "precision_at_5", "value": 0.09}, {"type": "recall_at_1", "value": 0.0}, {"type": "recall_at_10", "value": 0.44999999999999996}, {"type": "recall_at_100", "value": 2.252}, {"type": "recall_at_1000", "value": 20.27}, {"type": "recall_at_20", "value": 0.901}, {"type": "recall_at_3", "value": 0.44999999999999996}, {"type": "recall_at_5", "value": 0.44999999999999996}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FQuADRetrieval", "type": "manu/fquad2_test", "config": "default", "split": "test", "revision": "5384ce827bbc2156d46e6fcba83d75f8e6e1b4a6"}, "metrics": [{"type": "map_at_1", "value": 53.0}, {"type": "map_at_10", "value": 65.393}, {"type": "map_at_100", "value": 65.791}, {"type": "map_at_1000", "value": 65.79899999999999}, {"type": "map_at_20", "value": 65.644}, {"type": "map_at_3", "value": 62.74999999999999}, {"type": "map_at_5", "value": 64.075}, {"type": "mrr_at_1", "value": 53.0}, {"type": "mrr_at_10", "value": 65.393}, {"type": "mrr_at_100", "value": 65.791}, {"type": "mrr_at_1000", "value": 65.79899999999999}, {"type": "mrr_at_20", "value": 65.644}, {"type": 
"mrr_at_3", "value": 62.74999999999999}, {"type": "mrr_at_5", "value": 64.075}, {"type": "ndcg_at_1", "value": 53.0}, {"type": "ndcg_at_10", "value": 71.38600000000001}, {"type": "ndcg_at_100", "value": 73.275}, {"type": "ndcg_at_1000", "value": 73.42}, {"type": "ndcg_at_20", "value": 72.28099999999999}, {"type": "ndcg_at_3", "value": 65.839}, {"type": "ndcg_at_5", "value": 68.217}, {"type": "precision_at_1", "value": 53.0}, {"type": "precision_at_10", "value": 9.025}, {"type": "precision_at_100", "value": 0.9900000000000001}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_20", "value": 4.688}, {"type": "precision_at_3", "value": 24.917}, {"type": "precision_at_5", "value": 16.1}, {"type": "recall_at_1", "value": 53.0}, {"type": "recall_at_10", "value": 90.25}, {"type": "recall_at_100", "value": 99.0}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_20", "value": 93.75}, {"type": "recall_at_3", "value": 74.75}, {"type": "recall_at_5", "value": 80.5}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB HALClusteringS2S", "type": "lyon-nlp/clustering-hal-s2s", "config": "default", "split": "test", "revision": "e06ebbbb123f8144bef1a5d18796f3dec9ae2915"}, "metrics": [{"type": "v_measure", "value": 25.756762769106768}, {"type": "v_measures", "value": [0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 
0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 
0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 
0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 
0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 
0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 
0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097, 0.27992488796059395, 0.2764506771771727, 0.3243693580623437, 0.2779803803468105, 0.23611069524035522, 0.20081340028652678, 0.20920466845471178, 0.24107059599883462, 0.2301770805728178, 0.2995745328105097]}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MLSUMClusteringP2P", "type": "mlsum", "config": "fr", "split": "test", "revision": "b5d54f8f3b61ae17845046286940f03c6bc79bc7"}, "metrics": [{"type": "v_measure", "value": 41.82320155017088}, {"type": "v_measures", "value": [0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 
0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 
0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 
0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 
0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 
0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863, 0.4222470060684208, 0.4358887035247771, 0.4288044607351393, 0.4278211144128014, 0.3701492929109167, 0.42700292743096274, 0.4396567571802029, 0.41889525107583575, 0.4178682135997126, 0.39398642807831863]}, {"type": "v_measure", "value": 41.83097630331037}, {"type": "v_measures", "value": [0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 
0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 
0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 
0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 
0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 
0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 
0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202, 0.40556802081735377, 0.43455329357006206, 0.43325754900068963, 0.417432483055725, 0.37833136631959036, 0.43947993212701897, 0.4446524983622663, 0.4197600281286437, 0.4123873856320672, 0.3976750733176202]}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (fr)", "type": "mteb/mtop_domain", "config": "fr", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 88.52489821484498}, {"type": "f1", "value": 88.39995340840026}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (fr)", "type": "mteb/mtop_intent", "config": "fr", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 65.92546194801128}, {"type": "f1", "value": 46.53109996877417}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MasakhaNEWSClassification (fra)", "type": "masakhane/masakhanews", "config": "fra", "split": "test", "revision": "8ccc72e69e65f40c70e117d8b3c08306bb788b60"}, "metrics": [{"type": "accuracy", "value": 75.1658767772512}, {"type": "f1", "value": 71.02734472473519}]}, {"task": 
{"type": "Clustering"}, "dataset": {"name": "MTEB MasakhaNEWSClusteringP2P (fra)", "type": "masakhane/masakhanews", "config": "fra", "split": "test", "revision": "8ccc72e69e65f40c70e117d8b3c08306bb788b60"}, "metrics": [{"type": "v_measure", "value": 42.63310602457368}, {"type": "v_measures", "value": [1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 
0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 
0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835, 1.0, 0.06369976992079955, 0.5883538895545949, 0.18727949529930651, 0.2923221464539835]}, {"type": "v_measure", "value": 36.39726683344252}, {"type": "v_measures", "value": [1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 
0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 
0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 
0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356, 1.0, 0.08178119837705045, 0.23052387187167908, 0.4044733130065427, 0.10308495841685356]}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fr)", "type": "mteb/amazon_massive_intent", "config": "fr", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 66.48285137861467}, {"type": "f1", "value": 64.74690799351637}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fr)", "type": "mteb/amazon_massive_scenario", "config": "fr", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 71.47276395427033}, {"type": "f1", "value": 71.9261164692627}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MintakaRetrieval (fr)", "type": "jinaai/mintakaqa", "config": "fr", "split": "test", "revision": "efa78cc2f74bbcd21eff2261f9e13aebe40b814e"}, "metrics": [{"type": "map_at_1", "value": 16.134}, {"type": "map_at_10", "value": 26.040000000000003}, {"type": "map_at_100", "value": 27.233}, {"type": "map_at_1000", "value": 27.315}, {"type": "map_at_20", "value": 26.741999999999997}, {"type": "map_at_3", "value": 23.219}, {"type": "map_at_5", "value": 24.962999999999997}, {"type": "mrr_at_1", "value": 16.134}, {"type": "mrr_at_10", "value": 26.040000000000003}, {"type": "mrr_at_100", "value": 27.233}, {"type": "mrr_at_1000", "value": 27.315}, {"type": "mrr_at_20", "value": 26.741999999999997}, {"type": "mrr_at_3", "value": 23.219}, {"type": "mrr_at_5", "value": 24.962999999999997}, {"type": "ndcg_at_1", "value": 16.134}, {"type": "ndcg_at_10", "value": 31.255}, {"type": "ndcg_at_100", "value": 37.462}, {"type": "ndcg_at_1000", "value": 39.85}, {"type": "ndcg_at_20", "value": 33.853}, {"type": "ndcg_at_3", "value": 25.513}, {"type": "ndcg_at_5", "value": 28.653000000000002}, {"type": "precision_at_1", "value": 16.134}, {"type": "precision_at_10", "value": 4.779}, {"type": "precision_at_100", "value": 0.777}, {"type": "precision_at_1000", "value": 0.097}, {"type": "precision_at_20", "value": 2.907}, {"type": "precision_at_3", "value": 10.715}, {"type": "precision_at_5", "value": 7.951999999999999}, {"type": "recall_at_1", "value": 16.134}, {"type": "recall_at_10", "value": 47.789}, {"type": "recall_at_100", "value": 77.682}, {"type": "recall_at_1000", "value": 96.929}, {"type": "recall_at_20", "value": 58.148999999999994}, {"type": "recall_at_3", "value": 32.146}, {"type": "recall_at_5", "value": 39.762}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB OpusparcusPC (fr)", "type": "GEM/opusparcus", "config": "fr", "split": "test", "revision": "9e9b1f8ef51616073f47f306f7f47dd91663f86a"}, "metrics": [{"type": "cos_sim_accuracy", "value": 84.40054495912807}, {"type": "cos_sim_ap", "value": 93.71617772707746}, {"type": "cos_sim_f1", "value": 89.00624099855978}, {"type": "cos_sim_precision", "value": 86.15241635687732}, {"type": "cos_sim_recall", "value": 92.05561072492551}, {"type": "dot_accuracy", "value": 82.35694822888283}, {"type": "dot_ap", "value": 92.22992449042768}, {"type": "dot_f1", "value": 87.84786641929499}, {"type": "dot_precision", "value": 82.4194952132289}, {"type": "dot_recall", "value": 94.04170804369414}, {"type": "euclidean_accuracy", "value": 82.90190735694823}, {"type": "euclidean_ap", "value": 93.27345126956494}, {"type": "euclidean_f1", "value": 87.82608695652175}, 
{"type": "euclidean_precision", "value": 85.51269990592662}, {"type": "euclidean_recall", "value": 90.26812313803376}, {"type": "manhattan_accuracy", "value": 82.9700272479564}, {"type": "manhattan_ap", "value": 93.34994137379041}, {"type": "manhattan_f1", "value": 87.776708373436}, {"type": "manhattan_precision", "value": 85.15406162464986}, {"type": "manhattan_recall", "value": 90.56603773584906}, {"type": "max_accuracy", "value": 84.40054495912807}, {"type": "max_ap", "value": 93.71617772707746}, {"type": "max_f1", "value": 89.00624099855978}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB PawsX (fr)", "type": "paws-x", "config": "fr", "split": "test", "revision": "8a04d940a42cd40658986fdd8e3da561533a3646"}, "metrics": [{"type": "cos_sim_accuracy", "value": 63.849999999999994}, {"type": "cos_sim_ap", "value": 65.42493744372587}, {"type": "cos_sim_f1", "value": 63.87434554973822}, {"type": "cos_sim_precision", "value": 52.69978401727862}, {"type": "cos_sim_recall", "value": 81.06312292358804}, {"type": "dot_accuracy", "value": 55.35}, {"type": "dot_ap", "value": 48.9364958676423}, {"type": "dot_f1", "value": 62.491349480968864}, {"type": "dot_precision", "value": 45.44539506794162}, {"type": "dot_recall", "value": 100.0}, {"type": "euclidean_accuracy", "value": 64.4}, {"type": "euclidean_ap", "value": 65.8622099022063}, {"type": "euclidean_f1", "value": 63.762044407205686}, {"type": "euclidean_precision", "value": 51.280323450134766}, {"type": "euclidean_recall", "value": 84.27464008859357}, {"type": "manhattan_accuracy", "value": 64.5}, {"type": "manhattan_ap", "value": 65.89565256625798}, {"type": "manhattan_f1", "value": 63.75364128173118}, {"type": "manhattan_precision", "value": 51.06666666666667}, {"type": "manhattan_recall", "value": 84.82834994462901}, {"type": "max_accuracy", "value": 64.5}, {"type": "max_ap", "value": 65.89565256625798}, {"type": "max_f1", "value": 63.87434554973822}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICKFr", "type": "Lajavaness/SICK-fr", "config": "default", "split": "test", "revision": "e077ab4cf4774a1e36d86d593b150422fafd8e8a"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.55191450239424}, {"type": "cos_sim_spearman", "value": 71.89209513298714}, {"type": "euclidean_pearson", "value": 74.18063891200164}, {"type": "euclidean_spearman", "value": 69.61463203410928}, {"type": "manhattan_pearson", "value": 74.26272426503743}, {"type": "manhattan_spearman", "value": 69.64261630235363}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (fr)", "type": "mteb/sts22-crosslingual-sts", "config": "fr", "split": "test", "revision": "eea2b4fe26a775864c896887d910b76a8098ad3f"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.18370529169849}, {"type": "cos_sim_spearman", "value": 80.80074537316342}, {"type": "euclidean_pearson", "value": 72.62308682855334}, {"type": "euclidean_spearman", "value": 75.64665618431559}, {"type": "manhattan_pearson", "value": 72.75806349827452}, {"type": "manhattan_spearman", "value": 75.34151992740627}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmarkMultilingualSTS (fr)", "type": "PhilipMay/stsb_multi_mt", "config": "fr", "split": "test", "revision": "93d57ef91790589e3ce9c365164337a8a78b7632"}, "metrics": [{"type": "cos_sim_pearson", "value": 81.29611168887834}, {"type": "cos_sim_spearman", "value": 80.23434765396613}, {"type": "euclidean_pearson", "value": 77.85740285296822}, {"type": "euclidean_spearman", "value": 78.42089083386267}, {"type": 
"manhattan_pearson", "value": 77.85850984492824}, {"type": "manhattan_spearman", "value": 78.42578976788568}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEvalFr", "type": "lyon-nlp/summarization-summeval-fr-p2p", "config": "default", "split": "test", "revision": "b385812de6a9577b6f4d0f88c6a6e35395a94054"}, "metrics": [{"type": "cos_sim_pearson", "value": 31.143644380271375}, {"type": "cos_sim_spearman", "value": 32.45645175142292}, {"type": "dot_pearson", "value": 28.46685825407204}, {"type": "dot_spearman", "value": 29.164040487051512}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SyntecReranking", "type": "lyon-nlp/mteb-fr-reranking-syntec-s2p", "config": "default", "split": "test", "revision": "b205c5084a0934ce8af14338bf03feb19499c84d"}, "metrics": [{"type": "map", "value": 82.65}, {"type": "mrr", "value": 82.65}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SyntecRetrieval", "type": "lyon-nlp/mteb-fr-retrieval-syntec-s2p", "config": "default", "split": "test", "revision": "19661ccdca4dfc2d15122d776b61685f48c68ca9"}, "metrics": [{"type": "map_at_1", "value": 56.99999999999999}, {"type": "map_at_10", "value": 70.6}, {"type": "map_at_100", "value": 70.814}, {"type": "map_at_1000", "value": 70.814}, {"type": "map_at_20", "value": 70.733}, {"type": "map_at_3", "value": 67.833}, {"type": "map_at_5", "value": 70.18299999999999}, {"type": "mrr_at_1", "value": 56.99999999999999}, {"type": "mrr_at_10", "value": 70.6}, {"type": "mrr_at_100", "value": 70.814}, {"type": "mrr_at_1000", "value": 70.814}, {"type": "mrr_at_20", "value": 70.733}, {"type": "mrr_at_3", "value": 67.833}, {"type": "mrr_at_5", "value": 70.18299999999999}, {"type": "ndcg_at_1", "value": 56.99999999999999}, {"type": "ndcg_at_10", "value": 76.626}, {"type": "ndcg_at_100", "value": 77.69500000000001}, {"type": "ndcg_at_1000", "value": 77.69500000000001}, {"type": "ndcg_at_20", "value": 77.12400000000001}, {"type": "ndcg_at_3", "value": 71.464}, {"type": "ndcg_at_5", "value": 75.639}, {"type": "precision_at_1", "value": 56.99999999999999}, {"type": "precision_at_10", "value": 9.5}, {"type": "precision_at_100", "value": 1.0}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_20", "value": 4.8500000000000005}, {"type": "precision_at_3", "value": 27.333000000000002}, {"type": "precision_at_5", "value": 18.4}, {"type": "recall_at_1", "value": 56.99999999999999}, {"type": "recall_at_10", "value": 95.0}, {"type": "recall_at_100", "value": 100.0}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_20", "value": 97.0}, {"type": "recall_at_3", "value": 82.0}, {"type": "recall_at_5", "value": 92.0}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB XPQARetrieval (fr)", "type": "jinaai/xpqa", "config": "fr", "split": "test", "revision": "c99d599f0a6ab9b85b065da6f9d94f9cf731679f"}, "metrics": [{"type": "map_at_1", "value": 39.217}, {"type": "map_at_10", "value": 60.171}, {"type": "map_at_100", "value": 61.736999999999995}, {"type": "map_at_1000", "value": 61.787000000000006}, {"type": "map_at_20", "value": 61.211000000000006}, {"type": "map_at_3", "value": 53.43}, {"type": "map_at_5", "value": 57.638}, {"type": "mrr_at_1", "value": 62.617}, {"type": "mrr_at_10", "value": 69.32300000000001}, {"type": "mrr_at_100", "value": 69.95400000000001}, {"type": "mrr_at_1000", "value": 69.968}, {"type": "mrr_at_20", "value": 69.77799999999999}, {"type": "mrr_at_3", "value": 67.423}, {"type": "mrr_at_5", "value": 68.445}, {"type": "ndcg_at_1", "value": 62.617}, 
{"type": "ndcg_at_10", "value": 66.55499999999999}, {"type": "ndcg_at_100", "value": 71.521}, {"type": "ndcg_at_1000", "value": 72.32300000000001}, {"type": "ndcg_at_20", "value": 69.131}, {"type": "ndcg_at_3", "value": 60.88099999999999}, {"type": "ndcg_at_5", "value": 62.648}, {"type": "precision_at_1", "value": 62.617}, {"type": "precision_at_10", "value": 15.540999999999999}, {"type": "precision_at_100", "value": 1.9529999999999998}, {"type": "precision_at_1000", "value": 0.20600000000000002}, {"type": "precision_at_20", "value": 8.658000000000001}, {"type": "precision_at_3", "value": 36.805}, {"type": "precision_at_5", "value": 26.622}, {"type": "recall_at_1", "value": 39.217}, {"type": "recall_at_10", "value": 75.547}, {"type": "recall_at_100", "value": 94.226}, {"type": "recall_at_1000", "value": 99.433}, {"type": "recall_at_20", "value": 83.883}, {"type": "recall_at_3", "value": 57.867999999999995}, {"type": "recall_at_5", "value": 66.08800000000001}]}]}]}
manu/sentence_croissant_alpha_v0.3
null
[ "sentence-transformers", "safetensors", "llama", "feature-extraction", "sentence-similarity", "mteb", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:09:06+00:00
[]
[]
TAGS #sentence-transformers #safetensors #llama #feature-extraction #sentence-similarity #mteb #model-index #endpoints_compatible #region-us
# {MODEL_NAME} This is a sentence-transformers model: It maps sentences & paragraphs to a 2048 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Training The model was trained with the parameters: DataLoader: 'torch.utils.data.dataloader.DataLoader' of length 1401 with parameters: Loss: '__main__.MultipleNegativesRankingLoss_with_logging' Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors
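The usage snippet referenced above was stripped during processing; a minimal sketch of the standard sentence-transformers pattern it refers to, using the repo id from this record and the 2048-dimensional output described in the card, might look like this:

```python
# pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence.", "Each sentence is converted into a vector."]

# Repo id taken from this record; the card itself leaves {MODEL_NAME} unfilled.
model = SentenceTransformer("manu/sentence_croissant_alpha_v0.3")
embeddings = model.encode(sentences)
print(embeddings.shape)  # expected (2, 2048) given the dimensionality stated above
```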
[ "# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 2048 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1401 with parameters:\n\n\nLoss:\n\n'__main__.MultipleNegativesRankingLoss_with_logging' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #safetensors #llama #feature-extraction #sentence-similarity #mteb #model-index #endpoints_compatible #region-us \n", "# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 2048 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1401 with parameters:\n\n\nLoss:\n\n'__main__.MultipleNegativesRankingLoss_with_logging' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Likich/grok-finetune-qualcoding
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:10:13+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
stable-baselines3
# **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
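The usage section above is left as a TODO by the card author; a minimal sketch of the usual huggingface_sb3 loading pattern, where the checkpoint filename and the panda_gym dependency are assumptions rather than details confirmed by the card, could be:

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- registers the PandaReachDense-v3 environment
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename follows the common <algo>-<env>.zip convention; it is an assumption.
checkpoint = load_from_hub(
    repo_id="ahGadji/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, info = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```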
{"library_name": "stable-baselines3", "tags": ["PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaReachDense-v3", "type": "PandaReachDense-v3"}, "metrics": [{"type": "mean_reward", "value": "-0.25 +/- 0.06", "name": "mean_reward", "verified": false}]}]}]}
ahGadji/a2c-PandaReachDense-v3
null
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-26T13:10:50+00:00
[]
[]
TAGS #stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# A2C Agent playing PandaReachDense-v3 This is a trained model of a A2C agent playing PandaReachDense-v3 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
vaatsav06/Llama3_medqa2
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:10:57+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
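The getting-started section in this card is empty; based only on the repo tags (llama, text-generation, conversational), a generic loading sketch (an assumption, not documented usage) would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Inferred from the repo tags alone; the card does not document usage.
model_id = "AndersGiovanni/social-llama-3-8b-instructions"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs accelerate

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```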
{"library_name": "transformers", "tags": []}
AndersGiovanni/social-llama-3-8b-instructions
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T13:11:12+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-dpo-full This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.4929 - Rewards/chosen: 21.1860 - Rewards/rejected: 6.2518 - Rewards/accuracies: 0.7344 - Rewards/margins: 14.9342 - Logps/rejected: -256.4154 - Logps/chosen: -241.4075 - Logits/rejected: -2.7091 - Logits/chosen: -2.7366 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.5187 | 0.21 | 100 | 0.5296 | 19.0644 | 9.0310 | 0.7227 | 10.0334 | -253.6362 | -243.5290 | -2.7384 | -2.7638 | | 0.508 | 0.42 | 200 | 0.5006 | 20.6504 | 7.0237 | 0.7266 | 13.6267 | -255.6435 | -241.9431 | -2.7569 | -2.7826 | | 0.4808 | 0.63 | 300 | 0.4966 | 20.8183 | 6.9540 | 0.7227 | 13.8643 | -255.7132 | -241.7751 | -2.7115 | -2.7378 | | 0.4835 | 0.84 | 400 | 0.4917 | 21.2230 | 6.3692 | 0.7344 | 14.8539 | -256.2980 | -241.3705 | -2.7037 | -2.7315 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.2
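For readers who want to see how the reported hyperparameters fit together, here is a sketch of a TRL DPOTrainer setup that mirrors them. It assumes an early-2024 TRL release (beta passed directly to the trainer), a beta of 0.1 (not reported by the card), and omits the alignment-handbook preprocessing that flattens the dataset's chosen/rejected message lists into text; it is not the exact training script.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("alignment-handbook/zephyr-7b-sft-full")
tokenizer = AutoTokenizer.from_pretrained("alignment-handbook/zephyr-7b-sft-full")

# The raw split stores chosen/rejected as message lists; the alignment-handbook
# recipe applies the chat template to produce plain prompt/chosen/rejected
# strings first. That preprocessing step is omitted here.
train_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

# Mirrors the hyperparameters reported above (8 GPUs x batch 8 x grad accum 2 = 128).
args = TrainingArguments(
    output_dir="zephyr-7b-dpo-full",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # TRL builds the frozen reference copy when None
    beta=0.1,        # assumption: the card does not report beta
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```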
{"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "alignment-handbook/zephyr-7b-sft-full", "model-index": [{"name": "zephyr-7b-dpo-full", "results": []}]}
RikkiXu/zephyr-7b-dpo-full
null
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T13:13:29+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
zephyr-7b-dpo-full ================== This model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full on the HuggingFaceH4/ultrafeedback\_binarized dataset. It achieves the following results on the evaluation set: * Loss: 0.4929 * Rewards/chosen: 21.1860 * Rewards/rejected: 6.2518 * Rewards/accuracies: 0.7344 * Rewards/margins: 14.9342 * Logps/rejected: -256.4154 * Logps/chosen: -241.4075 * Logits/rejected: -2.7091 * Logits/chosen: -2.7366 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-07 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 8 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.1.2+cu118 * Datasets 2.16.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2+cu118\n* Datasets 2.16.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2+cu118\n* Datasets 2.16.1\n* Tokenizers 0.15.2" ]
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trocr-base-printed_license_plates_ocr This model is a fine-tuned version of [microsoft/trocr-base-printed](https://huggingface.co/microsoft/trocr-base-printed) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1550 - Cer: 0.037 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3034 | 1.0 | 2000 | 0.2454 | 0.0472 | | 0.1451 | 2.0 | 4000 | 0.1550 | 0.037 | ### Framework versions - Transformers 4.30.0 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.13.3
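A short inference sketch for this checkpoint, assuming the processor files were exported with the model (if not, the processor can be loaded from the base microsoft/trocr-base-printed instead), might look like this:

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

repo = "artbreguez/trocr-base-printed_license_plates_ocr"
processor = TrOCRProcessor.from_pretrained(repo)  # assumption; see note above
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("plate.jpg").convert("RGB")  # placeholder path to a plate crop
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
plate_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(plate_text)
```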
{"tags": ["generated_from_trainer"], "model-index": [{"name": "trocr-base-printed_license_plates_ocr", "results": []}]}
artbreguez/trocr-base-printed_license_plates_ocr
null
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:14:14+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #vision-encoder-decoder #generated_from_trainer #endpoints_compatible #region-us
trocr-base-printed\_license\_plates\_ocr ======================================== This model is a fine-tuned version of microsoft/trocr-base-printed on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1550 * Cer: 0.037 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.30.0 * Pytorch 2.2.2+cu121 * Datasets 2.19.0 * Tokenizers 0.13.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.30.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #vision-encoder-decoder #generated_from_trainer #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.30.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3" ]
text-generation
mlx
# mlx-community/Qwen1.5-110B-4bit This model was converted to MLX format from [`Qwen/Qwen1.5-110B`]() using mlx-lm version **0.12.0**. Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-110B) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Qwen1.5-110B-4bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"language": ["en"], "license": "other", "tags": ["pretrained", "mlx"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/Qwen1.5-110B/blob/main/LICENSE", "pipeline_tag": "text-generation"}
mlx-community/Qwen1.5-110B-4bit
null
[ "mlx", "safetensors", "qwen2", "pretrained", "text-generation", "conversational", "en", "license:other", "region:us" ]
null
2024-04-26T13:19:33+00:00
[]
[ "en" ]
TAGS #mlx #safetensors #qwen2 #pretrained #text-generation #conversational #en #license-other #region-us
# mlx-community/Qwen1.5-110B-4bit This model was converted to MLX format from ['Qwen/Qwen1.5-110B']() using mlx-lm version 0.12.0. Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/Qwen1.5-110B-4bit\nThis model was converted to MLX format from ['Qwen/Qwen1.5-110B']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #qwen2 #pretrained #text-generation #conversational #en #license-other #region-us \n", "# mlx-community/Qwen1.5-110B-4bit\nThis model was converted to MLX format from ['Qwen/Qwen1.5-110B']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
text-generation
mlx
# mlx-community/Qwen1.5-110B-8bit This model was converted to MLX format from [`Qwen/Qwen1.5-110B`]() using mlx-lm version **0.12.0**. Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-110B) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Qwen1.5-110B-8bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"language": ["en"], "license": "other", "tags": ["pretrained", "mlx"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/Qwen1.5-110B/blob/main/LICENSE", "pipeline_tag": "text-generation"}
mlx-community/Qwen1.5-110B-8bit
null
[ "mlx", "safetensors", "qwen2", "pretrained", "text-generation", "conversational", "en", "license:other", "region:us" ]
null
2024-04-26T13:19:47+00:00
[]
[ "en" ]
TAGS #mlx #safetensors #qwen2 #pretrained #text-generation #conversational #en #license-other #region-us
# mlx-community/Qwen1.5-110B-8bit This model was converted to MLX format from ['Qwen/Qwen1.5-110B']() using mlx-lm version 0.12.0. Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/Qwen1.5-110B-8bit\nThis model was converted to MLX format from ['Qwen/Qwen1.5-110B']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #qwen2 #pretrained #text-generation #conversational #en #license-other #region-us \n", "# mlx-community/Qwen1.5-110B-8bit\nThis model was converted to MLX format from ['Qwen/Qwen1.5-110B']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
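The usage section above is a TODO; a minimal sketch of loading and evaluating the checkpoint with huggingface_sb3, where the zip filename is an assumption following the usual naming convention, could be:

```python
# pip install "gymnasium[box2d]" stable-baselines3 huggingface_sb3
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="AndrewBJ/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```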
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "296.63 +/- 17.46", "name": "mean_reward", "verified": false}]}]}]}
AndrewBJ/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-26T13:21:18+00:00
[]
[]
TAGS #stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# PPO Agent playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results-Meta-Llama-3-8B-tagllm-pos-1-reserved-unsloth This model is a fine-tuned version of [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9788 | 0.2 | 162 | 2.0304 | | 1.7172 | 0.4 | 324 | 1.8871 | | 1.9543 | 0.6 | 486 | 1.8420 | | 2.2679 | 0.8 | 648 | 1.8056 | | 1.6227 | 1.0 | 810 | 1.7917 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1 - Datasets 2.19.0 - Tokenizers 0.19.1
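Since the card does not show how to use the adapter, here is a hedged loading sketch: it assumes the LoRA weights in this repo attach to the stated unsloth/llama-3-8b-bnb-4bit base and that bitsandbytes and accelerate are installed for the 4-bit weights.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3-8b-bnb-4bit"
adapter_id = "AlienKevin/Meta-Llama-3-8B-tagllm-pos-1-reserved-unsloth"

base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # 4-bit via bitsandbytes
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_id)
```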
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "unsloth", "generated_from_trainer"], "base_model": "unsloth/llama-3-8b-bnb-4bit", "model-index": [{"name": "results-Meta-Llama-3-8B-tagllm-pos-1-reserved-unsloth", "results": []}]}
AlienKevin/Meta-Llama-3-8B-tagllm-pos-1-reserved-unsloth
null
[ "peft", "safetensors", "trl", "sft", "unsloth", "generated_from_trainer", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:llama2", "region:us" ]
null
2024-04-26T13:21:43+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #unsloth #generated_from_trainer #base_model-unsloth/llama-3-8b-bnb-4bit #license-llama2 #region-us
results-Meta-Llama-3-8B-tagllm-pos-1-reserved-unsloth ===================================================== This model is a fine-tuned version of unsloth/llama-3-8b-bnb-4bit on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.7917 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 12 * eval\_batch\_size: 12 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 10 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.1 * Pytorch 2.2.1 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #safetensors #trl #sft #unsloth #generated_from_trainer #base_model-unsloth/llama-3-8b-bnb-4bit #license-llama2 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
mlx
# mlx-community/Qwen1.5-110B-Chat-4bit This model was converted to MLX format from [`Qwen/Qwen1.5-110B-Chat`]() using mlx-lm version **0.12.0**. Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-110B-Chat) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Qwen1.5-110B-Chat-4bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"language": ["en"], "license": "other", "tags": ["chat", "mlx"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE", "pipeline_tag": "text-generation"}
mlx-community/Qwen1.5-110B-Chat-4bit
null
[ "mlx", "safetensors", "qwen2", "chat", "text-generation", "conversational", "en", "license:other", "region:us" ]
null
2024-04-26T13:22:28+00:00
[]
[ "en" ]
TAGS #mlx #safetensors #qwen2 #chat #text-generation #conversational #en #license-other #region-us
# mlx-community/Qwen1.5-110B-Chat-4bit This model was converted to MLX format from ['Qwen/Qwen1.5-110B-Chat']() using mlx-lm version 0.12.0. Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/Qwen1.5-110B-Chat-4bit\nThis model was converted to MLX format from ['Qwen/Qwen1.5-110B-Chat']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #qwen2 #chat #text-generation #conversational #en #license-other #region-us \n", "# mlx-community/Qwen1.5-110B-Chat-4bit\nThis model was converted to MLX format from ['Qwen/Qwen1.5-110B-Chat']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
text-generation
mlx
# mlx-community/Qwen1.5-110B-Chat-8bit This model was converted to MLX format from [`Qwen/Qwen1.5-110B-Chat`]() using mlx-lm version **0.12.0**. Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-110B-Chat) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Qwen1.5-110B-Chat-8bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"language": ["en"], "license": "other", "tags": ["chat", "mlx"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE", "pipeline_tag": "text-generation"}
mlx-community/Qwen1.5-110B-Chat-8bit
null
[ "mlx", "safetensors", "qwen2", "chat", "text-generation", "conversational", "en", "license:other", "region:us" ]
null
2024-04-26T13:22:53+00:00
[]
[ "en" ]
TAGS #mlx #safetensors #qwen2 #chat #text-generation #conversational #en #license-other #region-us
# mlx-community/Qwen1.5-110B-Chat-8bit This model was converted to MLX format from ['Qwen/Qwen1.5-110B-Chat']() using mlx-lm version 0.12.0. Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/Qwen1.5-110B-Chat-8bit\nThis model was converted to MLX format from ['Qwen/Qwen1.5-110B-Chat']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #qwen2 #chat #text-generation #conversational #en #license-other #region-us \n", "# mlx-community/Qwen1.5-110B-Chat-8bit\nThis model was converted to MLX format from ['Qwen/Qwen1.5-110B-Chat']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-7b-instruct-v0.2-bnb-4bit1024 This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2-bnb-4bit) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6953 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8431 | 0.02 | 25 | 1.4131 | | 0.8021 | 0.04 | 50 | 0.7911 | | 0.7972 | 0.05 | 75 | 0.7886 | | 0.7886 | 0.07 | 100 | 0.7780 | | 0.7762 | 0.09 | 125 | 0.7546 | | 0.7338 | 0.11 | 150 | 0.7332 | | 0.707 | 0.12 | 175 | 0.7399 | | 0.7252 | 0.14 | 200 | 0.7303 | | 0.7513 | 0.16 | 225 | 0.7384 | | 0.7275 | 0.18 | 250 | 0.7380 | | 0.7283 | 0.19 | 275 | 0.7285 | | 0.7132 | 0.21 | 300 | 0.7452 | | 0.7273 | 0.23 | 325 | 0.7370 | | 0.7353 | 0.25 | 350 | 0.7388 | | 0.7457 | 0.27 | 375 | 0.7292 | | 0.7404 | 0.28 | 400 | 0.7315 | | 0.7312 | 0.3 | 425 | 0.7341 | | 0.7285 | 0.32 | 450 | 0.7277 | | 0.7331 | 0.34 | 475 | 0.7318 | | 0.7179 | 0.35 | 500 | 0.7401 | | 0.7432 | 0.37 | 525 | 0.7399 | | 0.7305 | 0.39 | 550 | 0.7463 | | 0.723 | 0.41 | 575 | 0.7448 | | 0.7303 | 0.42 | 600 | 0.7339 | | 0.7213 | 0.44 | 625 | 0.7320 | | 0.7236 | 0.46 | 650 | 0.7378 | | 0.7263 | 0.48 | 675 | 0.7451 | | 0.7462 | 0.5 | 700 | 0.7238 | | 0.7287 | 0.51 | 725 | 0.7274 | | 0.7364 | 0.53 | 750 | 0.7369 | | 0.7276 | 0.55 | 775 | 0.7282 | | 0.7268 | 0.57 | 800 | 0.7431 | | 0.7382 | 0.58 | 825 | 0.7376 | | 0.7185 | 0.6 | 850 | 0.7402 | | 0.7153 | 0.62 | 875 | 0.7362 | | 0.7314 | 0.64 | 900 | 0.7395 | | 0.7465 | 0.65 | 925 | 0.7378 | | 0.7228 | 0.67 | 950 | 0.7333 | | 0.7336 | 0.69 | 975 | 0.7337 | | 0.72 | 0.71 | 1000 | 0.7313 | | 0.7258 | 0.73 | 1025 | 0.7379 | | 0.7312 | 0.74 | 1050 | 0.7342 | | 0.7268 | 0.76 | 1075 | 0.7350 | | 0.7137 | 0.78 | 1100 | 0.7401 | | 0.7277 | 0.8 | 1125 | 0.7277 | | 0.7314 | 0.81 | 1150 | 0.7388 | | 0.7106 | 0.83 | 1175 | 0.7371 | | 0.7226 | 0.85 | 1200 | 0.7326 | | 0.7262 | 0.87 | 1225 | 0.7328 | | 0.7356 | 0.88 | 1250 | 0.7408 | | 0.7245 | 0.9 | 1275 | 0.7365 | | 0.7221 | 0.92 | 1300 | 0.7404 | | 0.7194 | 0.94 | 1325 | 0.7418 | | 0.7209 | 0.96 | 1350 | 0.7380 | | 0.7205 | 0.97 | 1375 | 0.7279 | | 0.6788 | 0.99 | 1400 | 0.6953 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.1
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "unsloth", "unsloth", "unsloth", "generated_from_trainer"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "model-index": [{"name": "mistral-7b-instruct-v0.2-bnb-4bit1024", "results": []}]}
12yuens2/hotpotqa-unsloth-mistral-7b-4bit-1024
null
[ "peft", "safetensors", "trl", "sft", "unsloth", "generated_from_trainer", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "region:us" ]
null
2024-04-26T13:26:20+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #unsloth #generated_from_trainer #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #region-us
mistral-7b-instruct-v0.2-bnb-4bit1024 ===================================== This model is a fine-tuned version of unsloth/mistral-7b-instruct-v0.2-bnb-4bit on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.6953 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.05 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.1 * Datasets 2.18.0 * Tokenizers 0.15.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #trl #sft #unsloth #generated_from_trainer #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.1" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_clm-model This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the eli5_category dataset. It achieves the following results on the evaluation set: - Loss: 0.0890 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0259 | 1.0 | 957 | 0.0873 | | 0.0102 | 2.0 | 1914 | 0.0855 | | 0.0026 | 3.0 | 2871 | 0.0890 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "my_awesome_eli5_clm-model", "results": []}]}
mikaya-vu/my_awesome_eli5_clm-model
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "dataset:eli5_category", "base_model:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:26:41+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #dataset-eli5_category #base_model-google-bert/bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
my\_awesome\_eli5\_clm-model ============================ This model is a fine-tuned version of google-bert/bert-base-uncased on the eli5\_category dataset. It achieves the following results on the evaluation set: * Loss: 0.0890 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 ### Training results ### Framework versions * Transformers 4.41.0.dev0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #dataset-eli5_category #base_model-google-bert/bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
diffusers
# Marigold Normals (LCM) Model Card This model belongs to the family of diffusion-based Marigold models for solving various computer vision tasks. The Marigold Normals model focuses on the surface normals task. It takes an input image and computes surface normals in each pixel. The LCM stands for Latent Consistency Models, which is a technique for making the diffusion model fast. The Marigold Normals model is trained from Stable Diffusion with synthetic data, and the LCM model is further fine-tuned from it. Thanks to the rich visual knowledge stored in Stable Diffusion, Marigold models possess deep scene understanding and excel at solving computer vision tasks. Read more about Marigold in our paper titled "Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation". [![Website](doc/badges/badge-website.svg)](https://marigoldmonodepth.github.io) [![GitHub](https://img.shields.io/github/stars/prs-eth/Marigold?style=default&label=GitHub%20★&logo=github)](https://github.com/prs-eth/Marigold) [![Paper](doc/badges/badge-pdf.svg)](https://arxiv.org/abs/2312.02145) [![Hugging Face Space](https://img.shields.io/badge/🤗%20Hugging%20Face-Space-yellow)](https://huggingface.co/spaces/toshas/marigold) Developed by: [Bingxin Ke](http://www.kebingxin.com/), [Anton Obukhov](https://www.obukhov.ai/), [Shengyu Huang](https://shengyuh.github.io/), [Nando Metzger](https://nandometzger.github.io/), [Rodrigo Caye Daudt](https://rcdaudt.github.io/), [Konrad Schindler](https://scholar.google.com/citations?user=FZuNgqIAAAAJ&hl=en) ![teaser](doc/teaser_collage_transparant.png) ## 🎓 Citation ```bibtex @InProceedings{ke2023repurposing, title={Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation}, author={Bingxin Ke and Anton Obukhov and Shengyu Huang and Nando Metzger and Rodrigo Caye Daudt and Konrad Schindler}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2024} } ``` ## 🎫 License This work is licensed under the Apache License, Version 2.0 (as defined in the [LICENSE](LICENSE.txt)). By downloading and using the code and model you agree to the terms in the [LICENSE](LICENSE.txt). [![License](https://img.shields.io/badge/License-Apache--2.0-929292)](https://www.apache.org/licenses/LICENSE-2.0)
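A minimal inference sketch for this checkpoint, assuming a diffusers release that ships the Marigold pipelines (the `MarigoldNormalsPipeline` class and `visualize_normals` helper are assumed to be available in that release; the input image path is hypothetical):

```python
import torch
import diffusers

# Load the LCM normals checkpoint; fp16 keeps memory modest on a single GPU.
pipe = diffusers.MarigoldNormalsPipeline.from_pretrained(
    "prs-eth/marigold-normals-lcm-v0-1", torch_dtype=torch.float16
).to("cuda")

image = diffusers.utils.load_image("input.jpg")  # hypothetical input image
normals = pipe(image)  # the LCM variant needs only a few denoising steps

# Map the per-pixel normal vectors to an RGB visualization and save it.
vis = pipe.image_processor.visualize_normals(normals.prediction)
vis[0].save("normals.png")
```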
{"language": ["en"], "license": "apache-2.0", "tags": ["monocular normals estimation", "single image normals estimation", "normals", "in-the-wild", "zero-shot", "LCM"], "pipeline_tag": "normals-estimation"}
prs-eth/marigold-normals-lcm-v0-1
null
[ "diffusers", "safetensors", "monocular normals estimation", "single image normals estimation", "normals", "in-the-wild", "zero-shot", "LCM", "normals-estimation", "en", "arxiv:2312.02145", "license:apache-2.0", "diffusers:MarigoldPipeline", "region:us" ]
null
2024-04-26T13:27:15+00:00
[ "2312.02145" ]
[ "en" ]
TAGS #diffusers #safetensors #monocular normals estimation #single image normals estimation #normals #in-the-wild #zero-shot #LCM #normals-estimation #en #arxiv-2312.02145 #license-apache-2.0 #diffusers-MarigoldPipeline #region-us
# Marigold Normals (LCM) Model Card This model belongs to the family of diffusion-based Marigold models for solving various computer vision tasks. The Marigold Normals model focuses on the surface normals task. It takes an input image and computes surface normals in each pixel. The LCM stands for Latent Consistency Models, which is a technique for making the diffusion model fast. The Marigold Normals model is trained from Stable Diffusion with synthetic data, and the LCM model is further fine-tuned from it. Thanks to the rich visual knowledge stored in Stable Diffusion, Marigold models possess deep scene understanding and excel at solving computer vision tasks. Read more about Marigold in our paper titled "Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation". ![Website](URL) ![GitHub](URL ![Paper](URL ![Hugging Face Space](URL Developed by: Bingxin Ke, Anton Obukhov, Shengyu Huang, Nando Metzger, Rodrigo Caye Daudt, Konrad Schindler !teaser ## Citation ## License This work is licensed under the Apache License, Version 2.0 (as defined in the LICENSE). By downloading and using the code and model you agree to the terms in the LICENSE. ![License](URL
[ "# Marigold Normals (LCM) Model Card\n\nThis model belongs to the family of diffusion-based Marigold models for solving various computer vision tasks.\nThe Marigold Normals model focuses on the surface normals task.\nIt takes an input image and computes surface normals in each pixel.\nThe LCM stands for Latent Consistency Models, which is a technique for making the diffusion model fast.\nThe Marigold Normals model is trained from Stable Diffusion with synthetic data, and the LCM model is further fine-tuned from it.\nThanks to the rich visual knowledge stored in Stable Diffusion, Marigold models possess deep scene understanding and excel at solving computer vision tasks.\nRead more about Marigold in our paper titled \"Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation\".\n\n![Website](URL)\n![GitHub](URL\n![Paper](URL\n![Hugging Face Space](URL\n\nDeveloped by:\nBingxin Ke,\nAnton Obukhov,\nShengyu Huang,\nNando Metzger,\nRodrigo Caye Daudt,\nKonrad Schindler\n\n!teaser", "## Citation", "## License\n\nThis work is licensed under the Apache License, Version 2.0 (as defined in the LICENSE).\n\nBy downloading and using the code and model you agree to the terms in the LICENSE.\n\n![License](URL" ]
[ "TAGS\n#diffusers #safetensors #monocular normals estimation #single image normals estimation #normals #in-the-wild #zero-shot #LCM #normals-estimation #en #arxiv-2312.02145 #license-apache-2.0 #diffusers-MarigoldPipeline #region-us \n", "# Marigold Normals (LCM) Model Card\n\nThis model belongs to the family of diffusion-based Marigold models for solving various computer vision tasks.\nThe Marigold Normals model focuses on the surface normals task.\nIt takes an input image and computes surface normals in each pixel.\nThe LCM stands for Latent Consistency Models, which is a technique for making the diffusion model fast.\nThe Marigold Normals model is trained from Stable Diffusion with synthetic data, and the LCM model is further fine-tuned from it.\nThanks to the rich visual knowledge stored in Stable Diffusion, Marigold models possess deep scene understanding and excel at solving computer vision tasks.\nRead more about Marigold in our paper titled \"Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation\".\n\n![Website](URL)\n![GitHub](URL\n![Paper](URL\n![Hugging Face Space](URL\n\nDeveloped by:\nBingxin Ke,\nAnton Obukhov,\nShengyu Huang,\nNando Metzger,\nRodrigo Caye Daudt,\nKonrad Schindler\n\n!teaser", "## Citation", "## License\n\nThis work is licensed under the Apache License, Version 2.0 (as defined in the LICENSE).\n\nBy downloading and using the code and model you agree to the terms in the LICENSE.\n\n![License](URL" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # code-llama-7b-text-to-sql This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
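Since this repository holds a PEFT (LoRA) adapter rather than merged weights, a plausible loading sketch looks like the following; the adapter id is taken from this record, while the prompt format is an assumption (the card does not document one):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the frozen base model, then attach the fine-tuned LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "oukwuaba/code-llama-7b-text-to-sql")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

# Hypothetical text-to-SQL prompt; the training prompt template is not documented.
prompt = (
    "-- Schema: CREATE TABLE employees (id INT, name TEXT, salary INT)\n"
    "-- Question: What is the average salary?\n"
    "SELECT"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```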
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "codellama/CodeLlama-7b-hf", "model-index": [{"name": "code-llama-7b-text-to-sql", "results": []}]}
oukwuaba/code-llama-7b-text-to-sql
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:codellama/CodeLlama-7b-hf", "license:llama2", "region:us" ]
null
2024-04-26T13:28:44+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-codellama/CodeLlama-7b-hf #license-llama2 #region-us
# code-llama-7b-text-to-sql This model is a fine-tuned version of codellama/CodeLlama-7b-hf on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
[ "# code-llama-7b-text-to-sql\n\nThis model is a fine-tuned version of codellama/CodeLlama-7b-hf on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-codellama/CodeLlama-7b-hf #license-llama2 #region-us \n", "# code-llama-7b-text-to-sql\n\nThis model is a fine-tuned version of codellama/CodeLlama-7b-hf on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers"}
MD1998/chating_beginners_v1
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T13:30:33+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
Just a simple model using YOLOv8 for an image classification task on the IP102 dataset, with 20 classes selected by image count.
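The repository ships ONNX weights only, so inference can go through onnxruntime directly. A minimal sketch, assuming the default 224×224 input of YOLOv8 classification exports and simple [0, 1] pixel scaling (the file and image names are hypothetical):

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("ip102-yolov8-cls.onnx")  # hypothetical file name
input_name = session.get_inputs()[0].name

# Resize to the assumed export resolution and scale pixels to [0, 1].
img = Image.open("insect.jpg").convert("RGB").resize((224, 224))
x = (np.asarray(img, dtype=np.float32) / 255.0).transpose(2, 0, 1)[None]  # NCHW

logits = session.run(None, {input_name: x})[0]  # expected shape: (1, 20)
print("predicted class index:", int(logits.argmax()))
```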
{"license": "apache-2.0"}
Khieminem/ip102-yolov8-imgcls
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2024-04-26T13:30:40+00:00
[]
[]
TAGS #onnx #license-apache-2.0 #region-us
Just a simple model using YOLOv8 for an image classification task on the IP102 dataset, with 20 classes selected by image count.
[]
[ "TAGS\n#onnx #license-apache-2.0 #region-us \n" ]
text-generation
transformers
# finetune chinese Meta Llama3 Instruct 8b with Llama-Factory

Two LLaMA-Factory LoRA configurations were used; they differ only in whether the `nsfc_zh` dataset is included:

```
"top.model_name": "LLaMA3-8B-Chat",
"top.finetuning_type": "lora",
"top.adapter_path": [],
"top.quantization_bit": "none",
"top.template": "llama3",
"top.rope_scaling": "none",
"top.booster": "none",
"train.training_stage": "Supervised Fine-Tuning",
"train.dataset_dir": "data",
"train.dataset": [
"alpaca_zh",
"alpaca_gpt4_zh",
"guanaco",
"oaast_sft_zh",
"wikipedia_zh"
],

"top.model_name": "LLaMA3-8B-Chat",
"top.finetuning_type": "lora",
"top.adapter_path": [],
"top.quantization_bit": "none",
"top.template": "llama3",
"top.rope_scaling": "none",
"top.booster": "none",
"train.training_stage": "Supervised Fine-Tuning",
"train.dataset_dir": "data",
"train.dataset": [
"alpaca_zh",
"alpaca_gpt4_zh",
"guanaco",
"nsfc_zh",
"oaast_sft_zh",
"wikipedia_zh"
],
```
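A minimal chat-inference sketch for the resulting model, assuming the merged weights load as a standard transformers causal LM and use the Llama 3 chat template (the generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pooka74/LLaMA3-8B-Chat-Chinese"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "用一句话介绍一下大语言模型。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```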
{"license": "apache-2.0"}
pooka74/LLaMA3-8B-Chat-Chinese
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T13:32:01+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# finetune chinese Meta Llama3 Instruct 8b with Llama-Factory
[ "# finetune chinese Meta Llama3 Instruct 8b with Llama-Factory" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# finetune chinese Meta Llama3 Instruct 8b with Llama-Factory" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-31m_mz-131f_IMDB This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-31m", "model-index": [{"name": "robust_llm_pythia-31m_mz-131f_IMDB", "results": []}]}
AlignmentResearch/robust_llm_pythia-31m_mz-131f_IMDB
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-31m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T13:33:36+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-31m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# robust_llm_pythia-31m_mz-131f_IMDB This model is a fine-tuned version of EleutherAI/pythia-31m on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# robust_llm_pythia-31m_mz-131f_IMDB\n\nThis model is a fine-tuned version of EleutherAI/pythia-31m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-31m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# robust_llm_pythia-31m_mz-131f_IMDB\n\nThis model is a fine-tuned version of EleutherAI/pythia-31m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-dmae-va-U5-100-3i This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5087 - Accuracy: 0.8667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.9 | 7 | 0.5069 | 0.8333 | | 0.3296 | 1.94 | 15 | 0.5087 | 0.8667 | | 0.2919 | 2.97 | 23 | 0.5190 | 0.8667 | | 0.2572 | 4.0 | 31 | 0.6483 | 0.7667 | | 0.2572 | 4.9 | 38 | 0.5785 | 0.8167 | | 0.2229 | 5.94 | 46 | 0.5932 | 0.8333 | | 0.1799 | 6.97 | 54 | 0.5272 | 0.85 | | 0.1563 | 8.0 | 62 | 0.6124 | 0.85 | | 0.1563 | 8.9 | 69 | 0.6798 | 0.8167 | | 0.125 | 9.94 | 77 | 0.7356 | 0.7833 | | 0.1343 | 10.97 | 85 | 0.5086 | 0.85 | | 0.0906 | 12.0 | 93 | 0.7601 | 0.7667 | | 0.103 | 12.9 | 100 | 0.8084 | 0.8 | | 0.103 | 13.94 | 108 | 0.5612 | 0.85 | | 0.1002 | 14.97 | 116 | 0.6454 | 0.8333 | | 0.1107 | 16.0 | 124 | 0.7783 | 0.8 | | 0.1036 | 16.9 | 131 | 0.7857 | 0.7833 | | 0.1036 | 17.94 | 139 | 0.6504 | 0.8167 | | 0.1248 | 18.97 | 147 | 0.6510 | 0.8167 | | 0.1074 | 20.0 | 155 | 0.7813 | 0.7833 | | 0.1038 | 20.9 | 162 | 0.6553 | 0.8 | | 0.1052 | 21.94 | 170 | 0.6449 | 0.8333 | | 0.1052 | 22.97 | 178 | 0.7444 | 0.8 | | 0.0782 | 24.0 | 186 | 1.0751 | 0.6833 | | 0.0952 | 24.9 | 193 | 0.6453 | 0.8333 | | 0.0803 | 25.94 | 201 | 0.7794 | 0.8 | | 0.0803 | 26.97 | 209 | 0.6160 | 0.8333 | | 0.0947 | 28.0 | 217 | 0.6362 | 0.85 | | 0.0702 | 28.9 | 224 | 0.7610 | 0.8167 | | 0.0737 | 29.94 | 232 | 0.7924 | 0.8167 | | 0.0644 | 30.97 | 240 | 0.9755 | 0.8 | | 0.0644 | 32.0 | 248 | 0.8580 | 0.8333 | | 0.0695 | 32.9 | 255 | 1.1410 | 0.7167 | | 0.09 | 33.94 | 263 | 0.8442 | 0.8 | | 0.0619 | 34.97 | 271 | 1.1689 | 0.7167 | | 0.0619 | 36.0 | 279 | 0.7599 | 0.8333 | | 0.0607 | 36.9 | 286 | 0.8498 | 0.8167 | | 0.0509 | 37.94 | 294 | 0.8331 | 0.85 | | 0.0666 | 38.97 | 302 | 0.8166 | 0.8167 | | 0.0615 | 40.0 | 310 | 0.9394 | 0.7667 | | 0.0615 | 40.9 | 317 | 0.8837 | 0.8 | | 0.0503 | 41.94 | 325 | 0.8208 | 0.8333 | | 0.0431 | 42.97 | 333 | 1.1271 | 0.75 | | 0.0548 | 44.0 | 341 | 0.9044 | 0.7833 | | 0.0548 | 44.9 | 348 | 0.9017 | 0.8 | | 0.0414 | 45.94 | 356 | 1.1390 | 0.75 | | 0.0609 | 46.97 | 364 | 0.8937 | 0.8 | | 0.0556 | 48.0 | 372 | 0.8459 | 0.8 | | 0.0556 | 48.9 | 379 | 1.0285 | 0.7667 | | 0.0417 | 49.94 | 387 | 0.7379 | 0.85 | | 0.0409 | 50.97 | 395 | 0.7817 | 0.8333 | | 0.0206 | 52.0 | 403 | 0.7860 | 0.8167 | | 0.0414 | 52.9 | 410 | 0.8414 | 0.8167 | | 0.0414 | 53.94 | 418 | 0.8657 | 0.8 | | 0.0329 | 54.97 | 426 | 0.8824 | 0.8 | | 0.0394 | 56.0 | 434 | 0.7990 | 0.8333 | | 0.0373 | 56.9 | 441 | 0.8101 | 0.8167 | | 0.0373 | 57.94 | 449 | 0.8535 | 0.8 | | 0.0418 | 58.97 | 457 | 0.9149 | 0.8167 | | 0.0365 | 60.0 | 465 | 0.9278 | 0.8 | | 0.0367 | 60.9 | 472 | 
0.9064 | 0.8 | | 0.0355 | 61.94 | 480 | 0.9610 | 0.7833 | | 0.0355 | 62.97 | 488 | 0.9174 | 0.8167 | | 0.0492 | 64.0 | 496 | 0.9877 | 0.7667 | | 0.0326 | 64.9 | 503 | 1.0192 | 0.7833 | | 0.0233 | 65.94 | 511 | 0.9588 | 0.8 | | 0.0233 | 66.97 | 519 | 0.9829 | 0.7833 | | 0.0251 | 68.0 | 527 | 1.0540 | 0.7667 | | 0.0283 | 68.9 | 534 | 1.0556 | 0.7667 | | 0.0307 | 69.94 | 542 | 1.0036 | 0.7833 | | 0.0319 | 70.97 | 550 | 0.9294 | 0.8 | | 0.0319 | 72.0 | 558 | 1.0077 | 0.8 | | 0.0246 | 72.9 | 565 | 1.0298 | 0.7833 | | 0.0205 | 73.94 | 573 | 1.0041 | 0.7833 | | 0.0345 | 74.97 | 581 | 0.9182 | 0.7833 | | 0.0345 | 76.0 | 589 | 0.9054 | 0.8333 | | 0.0181 | 76.9 | 596 | 0.9338 | 0.8333 | | 0.0287 | 77.94 | 604 | 0.9678 | 0.7833 | | 0.0268 | 78.97 | 612 | 0.9841 | 0.7833 | | 0.0293 | 80.0 | 620 | 1.0380 | 0.7667 | | 0.0293 | 80.9 | 627 | 1.0837 | 0.7833 | | 0.0222 | 81.94 | 635 | 1.0132 | 0.7667 | | 0.033 | 82.97 | 643 | 0.9785 | 0.8 | | 0.0227 | 84.0 | 651 | 0.9848 | 0.8 | | 0.0227 | 84.9 | 658 | 0.9780 | 0.8 | | 0.0295 | 85.94 | 666 | 0.9613 | 0.8167 | | 0.0291 | 86.97 | 674 | 0.9753 | 0.8167 | | 0.031 | 88.0 | 682 | 0.9831 | 0.8 | | 0.031 | 88.9 | 689 | 0.9820 | 0.8 | | 0.0233 | 89.94 | 697 | 0.9793 | 0.8 | | 0.0195 | 90.32 | 700 | 0.9788 | 0.8 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
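For a checkpoint like this one, the usual transformers pipeline call is enough for inference; a short sketch (the image path is hypothetical, and the label names depend on the repository's config):

```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="Augusto777/vit-base-patch16-224-dmae-va-U5-100-3i",
)
# Returns a list of {"label": ..., "score": ...} dicts sorted by confidence.
print(clf("scan.png"))  # hypothetical input image
```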
{"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "vit-base-patch16-224-dmae-va-U5-100-3i", "results": []}]}
Augusto777/vit-base-patch16-224-dmae-va-U5-100-3i
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:34:06+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
vit-base-patch16-224-dmae-va-U5-100-3i ====================================== This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.5087 * Accuracy: 0.8667 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.05 * num\_epochs: 100 ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu118 * Datasets 2.16.1 * Tokenizers 0.15.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu118\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu118\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
sentence-similarity
sentence-transformers
# {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 31889 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters: ``` {'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 3188, "evaluator": "utils.ToponymResolutionEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
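Given the `ToponymResolutionEvaluator` used during training, a natural use is ranking candidate place names against a mention; a sketch using the model id from this record (the example strings are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("dguzh/geo-all-MiniLM-L6-v2")

mention = "Paris"
candidates = [
    "Paris, Île-de-France, France",
    "Paris, Texas, United States",
]
emb_m = model.encode(mention, convert_to_tensor=True)
emb_c = model.encode(candidates, convert_to_tensor=True)

# Cosine similarity between the mention and each candidate toponym.
scores = util.cos_sim(emb_m, emb_c)[0]
for cand, score in zip(candidates, scores):
    print(f"{score:.3f}  {cand}")
```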
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
dguzh/geo-all-MiniLM-L6-v2
null
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:35:46+00:00
[]
[]
TAGS #sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #endpoints_compatible #region-us
# {MODEL_NAME} This is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Training The model was trained with the parameters: DataLoader: 'URL.dataloader.DataLoader' of length 31889 with parameters: Loss: 'sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss' with parameters: Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors
[ "# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 31889 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n", "# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 31889 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b1-finetuned-cityscapes-1024-1024-straighter-only-test This model is a fine-tuned version of [nvidia/segformer-b1-finetuned-cityscapes-1024-1024](https://huggingface.co/nvidia/segformer-b1-finetuned-cityscapes-1024-1024) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0319 - Mean Iou: 0.9378 - Mean Accuracy: 0.9615 - Overall Accuracy: 0.9895 - Accuracy Default: 1e-06 - Accuracy Pipe: 0.8987 - Accuracy Floor: 0.9897 - Accuracy Background: 0.9959 - Iou Default: 1e-06 - Iou Pipe: 0.8434 - Iou Floor: 0.9813 - Iou Background: 0.9889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Default | Accuracy Pipe | Accuracy Floor | Accuracy Background | Iou Default | Iou Pipe | Iou Floor | Iou Background | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------:|:-------------:|:--------------:|:-------------------:|:-----------:|:--------:|:---------:|:--------------:| | 0.3904 | 1.0 | 36 | 0.1465 | 0.8037 | 0.8484 | 0.9645 | 1e-06 | 0.5855 | 0.9696 | 0.9900 | 1e-06 | 0.5120 | 0.9355 | 0.9635 | | 0.1244 | 2.0 | 72 | 0.0891 | 0.8640 | 0.9024 | 0.9766 | 1e-06 | 0.7371 | 0.9764 | 0.9938 | 1e-06 | 0.6565 | 0.9592 | 0.9762 | | 0.0818 | 3.0 | 108 | 0.0669 | 0.8868 | 0.9178 | 0.9804 | 1e-06 | 0.7826 | 0.9745 | 0.9965 | 1e-06 | 0.7154 | 0.9657 | 0.9793 | | 0.061 | 4.0 | 144 | 0.0525 | 0.9072 | 0.9407 | 0.9839 | 1e-06 | 0.8472 | 0.9801 | 0.9949 | 1e-06 | 0.7675 | 0.9711 | 0.9830 | | 0.051 | 5.0 | 180 | 0.0470 | 0.9118 | 0.9444 | 0.9849 | 1e-06 | 0.8585 | 0.9790 | 0.9958 | 1e-06 | 0.7789 | 0.9722 | 0.9845 | | 0.0461 | 6.0 | 216 | 0.0424 | 0.9191 | 0.9510 | 0.9861 | 1e-06 | 0.8736 | 0.9851 | 0.9944 | 1e-06 | 0.7959 | 0.9762 | 0.9851 | | 0.0388 | 7.0 | 252 | 0.0401 | 0.9184 | 0.9443 | 0.9862 | 1e-06 | 0.8508 | 0.9862 | 0.9960 | 1e-06 | 0.7932 | 0.9769 | 0.9853 | | 0.0348 | 8.0 | 288 | 0.0372 | 0.9244 | 0.9565 | 0.9870 | 1e-06 | 0.8894 | 0.9859 | 0.9943 | 1e-06 | 0.8104 | 0.9763 | 0.9865 | | 0.0324 | 9.0 | 324 | 0.0362 | 0.9237 | 0.9486 | 0.9870 | 1e-06 | 0.8656 | 0.9833 | 0.9969 | 1e-06 | 0.8076 | 0.9773 | 0.9861 | | 0.031 | 10.0 | 360 | 0.0349 | 0.9239 | 0.9520 | 0.9872 | 1e-06 | 0.8737 | 0.9870 | 0.9954 | 1e-06 | 0.8067 | 0.9788 | 0.9863 | | 0.0287 | 11.0 | 396 | 0.0333 | 0.9285 | 0.9531 | 0.9877 | 1e-06 | 0.8720 | 0.9930 | 0.9944 | 1e-06 | 0.8209 | 0.9778 | 0.9868 | | 0.0268 | 12.0 | 432 | 0.0332 | 0.9283 | 0.9522 | 0.9879 | 1e-06 | 0.8737 | 0.9865 | 0.9966 | 1e-06 | 0.8191 | 0.9787 | 0.9872 | | 0.025 | 13.0 | 468 | 0.0311 | 0.9317 | 0.9622 | 0.9883 | 1e-06 | 0.9042 | 0.9877 | 0.9945 | 1e-06 | 0.8281 | 0.9794 | 0.9877 | | 0.0247 | 14.0 | 504 | 0.0310 | 0.9308 | 0.9535 | 0.9884 | 1e-06 | 0.8742 | 0.9904 | 0.9959 | 1e-06 | 0.8247 | 0.9801 | 0.9876 | | 0.0236 | 15.0 | 
540 | 0.0307 | 0.9322 | 0.9538 | 0.9886 | 1e-06 | 0.8755 | 0.9897 | 0.9963 | 1e-06 | 0.8292 | 0.9793 | 0.9880 | | 0.0223 | 16.0 | 576 | 0.0301 | 0.9346 | 0.9633 | 0.9888 | 1e-06 | 0.9083 | 0.9861 | 0.9955 | 1e-06 | 0.8360 | 0.9791 | 0.9886 | | 0.0208 | 17.0 | 612 | 0.0308 | 0.9326 | 0.9578 | 0.9887 | 1e-06 | 0.8876 | 0.9907 | 0.9953 | 1e-06 | 0.8300 | 0.9797 | 0.9882 | | 0.0198 | 18.0 | 648 | 0.0295 | 0.9339 | 0.9589 | 0.9888 | 1e-06 | 0.8897 | 0.9921 | 0.9949 | 1e-06 | 0.8335 | 0.9799 | 0.9882 | | 0.0194 | 19.0 | 684 | 0.0311 | 0.9315 | 0.9524 | 0.9886 | 1e-06 | 0.8712 | 0.9894 | 0.9967 | 1e-06 | 0.8265 | 0.9802 | 0.9878 | | 0.0188 | 20.0 | 720 | 0.0299 | 0.9332 | 0.9558 | 0.9888 | 1e-06 | 0.8807 | 0.9906 | 0.9959 | 1e-06 | 0.8318 | 0.9796 | 0.9882 | | 0.0187 | 21.0 | 756 | 0.0298 | 0.9344 | 0.9567 | 0.9890 | 1e-06 | 0.8833 | 0.9905 | 0.9961 | 1e-06 | 0.8339 | 0.9810 | 0.9883 | | 0.0179 | 22.0 | 792 | 0.0304 | 0.9334 | 0.9566 | 0.9889 | 1e-06 | 0.8834 | 0.9904 | 0.9959 | 1e-06 | 0.8317 | 0.9804 | 0.9882 | | 0.0174 | 23.0 | 828 | 0.0301 | 0.9350 | 0.9603 | 0.9890 | 1e-06 | 0.8960 | 0.9895 | 0.9955 | 1e-06 | 0.8364 | 0.9803 | 0.9884 | | 0.017 | 24.0 | 864 | 0.0294 | 0.9352 | 0.9589 | 0.9890 | 1e-06 | 0.8925 | 0.9877 | 0.9963 | 1e-06 | 0.8371 | 0.9802 | 0.9883 | | 0.0172 | 25.0 | 900 | 0.0322 | 0.9334 | 0.9555 | 0.9888 | 1e-06 | 0.8796 | 0.9908 | 0.9960 | 1e-06 | 0.8320 | 0.9799 | 0.9882 | | 0.0165 | 26.0 | 936 | 0.0312 | 0.9331 | 0.9556 | 0.9888 | 1e-06 | 0.8813 | 0.9891 | 0.9964 | 1e-06 | 0.8318 | 0.9792 | 0.9884 | | 0.0162 | 27.0 | 972 | 0.0296 | 0.9350 | 0.9589 | 0.9891 | 1e-06 | 0.8911 | 0.9899 | 0.9959 | 1e-06 | 0.8360 | 0.9806 | 0.9885 | | 0.0155 | 28.0 | 1008 | 0.0314 | 0.9359 | 0.9578 | 0.9892 | 1e-06 | 0.8880 | 0.9890 | 0.9965 | 1e-06 | 0.8384 | 0.9808 | 0.9884 | | 0.0154 | 29.0 | 1044 | 0.0291 | 0.9379 | 0.9637 | 0.9894 | 1e-06 | 0.9061 | 0.9898 | 0.9952 | 1e-06 | 0.8438 | 0.9812 | 0.9887 | | 0.0151 | 30.0 | 1080 | 0.0289 | 0.9372 | 0.9620 | 0.9893 | 1e-06 | 0.8994 | 0.9912 | 0.9952 | 1e-06 | 0.8419 | 0.9810 | 0.9887 | | 0.0152 | 31.0 | 1116 | 0.0310 | 0.9365 | 0.9573 | 0.9893 | 1e-06 | 0.8865 | 0.9884 | 0.9969 | 1e-06 | 0.8397 | 0.9815 | 0.9884 | | 0.0143 | 32.0 | 1152 | 0.0307 | 0.9376 | 0.9614 | 0.9894 | 1e-06 | 0.8983 | 0.9904 | 0.9956 | 1e-06 | 0.8433 | 0.9809 | 0.9887 | | 0.0138 | 33.0 | 1188 | 0.0295 | 0.9385 | 0.9623 | 0.9896 | 1e-06 | 0.9004 | 0.9910 | 0.9955 | 1e-06 | 0.8451 | 0.9814 | 0.9889 | | 0.0149 | 34.0 | 1224 | 0.0308 | 0.9380 | 0.9617 | 0.9894 | 1e-06 | 0.9007 | 0.9883 | 0.9961 | 1e-06 | 0.8444 | 0.9809 | 0.9886 | | 0.0138 | 35.0 | 1260 | 0.0304 | 0.9376 | 0.9616 | 0.9894 | 1e-06 | 0.8993 | 0.9899 | 0.9958 | 1e-06 | 0.8431 | 0.9809 | 0.9888 | | 0.0138 | 36.0 | 1296 | 0.0299 | 0.9379 | 0.9598 | 0.9895 | 1e-06 | 0.8932 | 0.9901 | 0.9962 | 1e-06 | 0.8433 | 0.9816 | 0.9887 | | 0.0139 | 37.0 | 1332 | 0.0298 | 0.9378 | 0.9615 | 0.9895 | 1e-06 | 0.8983 | 0.9903 | 0.9958 | 1e-06 | 0.8435 | 0.9812 | 0.9889 | | 0.0133 | 38.0 | 1368 | 0.0293 | 0.9393 | 0.9624 | 0.9897 | 1e-06 | 0.9008 | 0.9906 | 0.9958 | 1e-06 | 0.8467 | 0.9823 | 0.9889 | | 0.0131 | 39.0 | 1404 | 0.0318 | 0.9368 | 0.9592 | 0.9893 | 1e-06 | 0.8922 | 0.9893 | 0.9963 | 1e-06 | 0.8406 | 0.9814 | 0.9884 | | 0.0129 | 40.0 | 1440 | 0.0303 | 0.9382 | 0.9627 | 0.9895 | 1e-06 | 0.9034 | 0.9890 | 0.9958 | 1e-06 | 0.8447 | 0.9813 | 0.9887 | | 0.0126 | 41.0 | 1476 | 0.0304 | 0.9392 | 0.9631 | 0.9896 | 1e-06 | 0.9037 | 0.9901 | 0.9956 | 1e-06 | 0.8471 | 0.9818 | 0.9887 | | 0.0126 | 42.0 | 1512 | 0.0311 | 0.9378 | 0.9595 
| 0.9895 | 1e-06 | 0.8929 | 0.9892 | 0.9965 | 1e-06 | 0.8432 | 0.9817 | 0.9887 | | 0.0125 | 43.0 | 1548 | 0.0314 | 0.9383 | 0.9611 | 0.9895 | 1e-06 | 0.8974 | 0.9899 | 0.9960 | 1e-06 | 0.8453 | 0.9809 | 0.9888 | | 0.0129 | 44.0 | 1584 | 0.0319 | 0.9374 | 0.9585 | 0.9895 | 1e-06 | 0.8886 | 0.9904 | 0.9964 | 1e-06 | 0.8420 | 0.9816 | 0.9887 | | 0.0127 | 45.0 | 1620 | 0.0313 | 0.9380 | 0.9594 | 0.9895 | 1e-06 | 0.8920 | 0.9900 | 0.9964 | 1e-06 | 0.8436 | 0.9816 | 0.9887 | | 0.0127 | 46.0 | 1656 | 0.0321 | 0.9379 | 0.9626 | 0.9895 | 1e-06 | 0.9029 | 0.9893 | 0.9957 | 1e-06 | 0.8444 | 0.9805 | 0.9890 | | 0.0121 | 47.0 | 1692 | 0.0321 | 0.9377 | 0.9599 | 0.9895 | 1e-06 | 0.8930 | 0.9907 | 0.9960 | 1e-06 | 0.8430 | 0.9813 | 0.9888 | | 0.0115 | 48.0 | 1728 | 0.0305 | 0.9390 | 0.9633 | 0.9897 | 1e-06 | 0.9043 | 0.9900 | 0.9957 | 1e-06 | 0.8463 | 0.9817 | 0.9890 | | 0.0118 | 49.0 | 1764 | 0.0319 | 0.9378 | 0.9615 | 0.9895 | 1e-06 | 0.8987 | 0.9897 | 0.9959 | 1e-06 | 0.8434 | 0.9813 | 0.9889 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.0.1 - Datasets 2.15.0 - Tokenizers 0.15.0
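### Inference example (sketch)

A minimal inference sketch, not part of the auto-generated card: it assumes the fine-tuned weights are hosted under this repo id and that the default `SegformerImageProcessor` settings apply.

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

ckpt = "selvaa/segformer-b1-finetuned-cityscapes-1024-1024-straighter-only-test"  # this repo (assumed)
processor = SegformerImageProcessor.from_pretrained(ckpt)
model = SegformerForSemanticSegmentation.from_pretrained(ckpt)

image = Image.open("scene.png").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]  # per-pixel class indices (pipe / floor / background / default)
```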
{"license": "other", "tags": ["generated_from_trainer"], "base_model": "nvidia/segformer-b1-finetuned-cityscapes-1024-1024", "model-index": [{"name": "segformer-b1-finetuned-cityscapes-1024-1024-straighter-only-test", "results": []}]}
selvaa/segformer-b1-finetuned-cityscapes-1024-1024-straighter-only-test
null
[ "transformers", "tensorboard", "safetensors", "segformer", "generated_from_trainer", "base_model:nvidia/segformer-b1-finetuned-cityscapes-1024-1024", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:40:44+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #segformer #generated_from_trainer #base_model-nvidia/segformer-b1-finetuned-cityscapes-1024-1024 #license-other #endpoints_compatible #region-us
segformer-b1-finetuned-cityscapes-1024-1024-straighter-only-test ================================================================ This model is a fine-tuned version of nvidia/segformer-b1-finetuned-cityscapes-1024-1024 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.0319 * Mean Iou: 0.9378 * Mean Accuracy: 0.9615 * Overall Accuracy: 0.9895 * Accuracy Default: 1e-06 * Accuracy Pipe: 0.8987 * Accuracy Floor: 0.9897 * Accuracy Background: 0.9959 * Iou Default: 1e-06 * Iou Pipe: 0.8434 * Iou Floor: 0.9813 * Iou Background: 0.9889 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 3 * eval\_batch\_size: 3 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 60 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.35.2 * Pytorch 2.0.1 * Datasets 2.15.0 * Tokenizers 0.15.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 60\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.0.1\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #tensorboard #safetensors #segformer #generated_from_trainer #base_model-nvidia/segformer-b1-finetuned-cityscapes-1024-1024 #license-other #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 60\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.0.1\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch completing the template stub (the checkpoint filename below is an assumption based on SB3's usual naming convention):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The .zip filename is an assumption; adjust it to the file actually stored in the repo.
checkpoint = load_from_hub(repo_id="mosterdslop/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "262.29 +/- 23.03", "name": "mean_reward", "verified": false}]}]}]}
mosterdslop/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-26T13:41:28+00:00
[]
[]
TAGS #stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# PPO Agent playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-8b-sft-qlora-re This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 100 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
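### Inference example (sketch)

A minimal loading sketch, not part of the original card: it assumes this repo holds only the QLoRA adapter weights, to be applied on top of the base model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"
adapter_id = "xahilmalik/llama3-8b-sft-qlora-re"  # this repo (assumed adapter-only)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter

inputs = tokenizer("Extract the relation:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```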
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "llama3-8b-sft-qlora-re", "results": []}]}
xahilmalik/llama3-8b-sft-qlora-re
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B", "license:other", "region:us" ]
null
2024-04-26T13:41:30+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B #license-other #region-us
# llama3-8b-sft-qlora-re This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 100 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# llama3-8b-sft-qlora-re\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 100", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B #license-other #region-us \n", "# llama3-8b-sft-qlora-re\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 100", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
sentence-similarity
sentence-transformers
# {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 31889 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters: ``` {'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 3188, "evaluator": "utils.ToponymResolutionEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
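A concrete variant of the usage snippet above, assuming this repo id stands in for `{MODEL_NAME}`; cosine similarity is the natural metric here given the `Normalize()` module in the architecture:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("dguzh/geo-all-distilroberta-v1")  # assumed repo id for {MODEL_NAME}
emb = model.encode(["Paris, France", "Paris, Texas, USA"], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]))  # higher score = more similar
```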
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
dguzh/geo-all-distilroberta-v1
null
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:42:11+00:00
[]
[]
TAGS #sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #endpoints_compatible #region-us
# {MODEL_NAME} This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Training The model was trained with the parameters: DataLoader: 'URL.dataloader.DataLoader' of length 31889 with parameters: Loss: 'sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss' with parameters: Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors
[ "# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 31889 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n", "# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 31889 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dsfdsf2/distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.8581 - Validation Loss: 3.6729 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.8581 | 3.6729 | 0 | ### Framework versions - Transformers 4.40.1 - TensorFlow 2.16.1 - Datasets 2.19.0 - Tokenizers 0.19.1
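### Inference example (sketch)

A minimal generation sketch, not part of the auto-generated card: it assumes the fine-tuned weights are hosted under this repo id, and uses the TensorFlow classes since the model was trained with Keras.

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

repo = "dsfdsf2/distilgpt2-finetuned-wikitext2"  # this repo (assumed)
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("The history of the city", return_tensors="tf")
out = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_k=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```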
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilgpt2", "model-index": [{"name": "dsfdsf2/distilgpt2-finetuned-wikitext2", "results": []}]}
dsfdsf2/distilgpt2-finetuned-wikitext2
null
[ "transformers", "tf", "tensorboard", "gpt2", "text-generation", "generated_from_keras_callback", "base_model:distilgpt2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T13:45:07+00:00
[]
[]
TAGS #transformers #tf #tensorboard #gpt2 #text-generation #generated_from_keras_callback #base_model-distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
dsfdsf2/distilgpt2-finetuned-wikitext2 ====================================== This model is a fine-tuned version of distilgpt2 on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 3.8581 * Validation Loss: 3.6729 * Epoch: 0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': 2e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\_decay\_rate': 0.01} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.40.1 * TensorFlow 2.16.1 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 2e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.16.1\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tf #tensorboard #gpt2 #text-generation #generated_from_keras_callback #base_model-distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 2e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.16.1\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/chargoddard/llama3-42b-v0 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/llama3-42b-v0-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-IQ1_S.gguf) | i1-IQ1_S | 9.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-IQ1_M.gguf) | i1-IQ1_M | 10.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.0 | | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 13.2 | | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-IQ2_S.gguf) | i1-IQ2_S | 13.9 | | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-IQ2_M.gguf) | i1-IQ2_M | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-Q2_K.gguf) | i1-Q2_K | 16.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 17.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 18.2 | | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 19.1 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-IQ3_S.gguf) | i1-IQ3_S | 19.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-IQ3_M.gguf) | i1-IQ3_M | 19.7 | | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 21.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 22.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 23.4 | | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-Q4_0.gguf) | i1-Q4_0 | 24.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 24.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 26.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 29.9 | | | 
[GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 30.7 | | | [GGUF](https://huggingface.co/mradermacher/llama3-42b-v0-i1-GGUF/resolve/main/llama3-42b-v0.i1-Q6_K.gguf) | i1-Q6_K | 35.5 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
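As a concrete starting point for the Usage section above, here is a minimal Python sketch via `llama-cpp-python`; the quant file picked below is one entry from the table, so swap in whichever size you downloaded:

```python
from llama_cpp import Llama

# Q4_K_M is the "fast, recommended" entry in the table above; any listed file works.
llm = Llama.from_pretrained(
    repo_id="mradermacher/llama3-42b-v0-i1-GGUF",
    filename="llama3-42b-v0.i1-Q4_K_M.gguf",
    n_ctx=2048,
)
print(llm("The meaning of life is", max_tokens=32)["choices"][0]["text"])
```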
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["axolotl", "mergekit", "llama"], "datasets": ["JeanKaddour/minipile"], "base_model": "chargoddard/llama3-42b-v0", "quantized_by": "mradermacher"}
mradermacher/llama3-42b-v0-i1-GGUF
null
[ "transformers", "gguf", "axolotl", "mergekit", "llama", "en", "dataset:JeanKaddour/minipile", "base_model:chargoddard/llama3-42b-v0", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:46:07+00:00
[]
[ "en" ]
TAGS #transformers #gguf #axolotl #mergekit #llama #en #dataset-JeanKaddour/minipile #base_model-chargoddard/llama3-42b-v0 #license-llama3 #endpoints_compatible #region-us
About ----- weighted/imatrix quants of URL static quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #axolotl #mergekit #llama #en #dataset-JeanKaddour/minipile #base_model-chargoddard/llama3-42b-v0 #license-llama3 #endpoints_compatible #region-us \n" ]
null
transformers
# itayl/Hebrew-Mistral-7B-Q5_K_M-GGUF This model was converted to GGUF format from [`yam-peleg/Hebrew-Mistral-7B`](https://huggingface.co/yam-peleg/Hebrew-Mistral-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/yam-peleg/Hebrew-Mistral-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo itayl/Hebrew-Mistral-7B-Q5_K_M-GGUF --model hebrew-mistral-7b.Q5_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo itayl/Hebrew-Mistral-7B-Q5_K_M-GGUF --model hebrew-mistral-7b.Q5_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m hebrew-mistral-7b.Q5_K_M.gguf -n 128 ```
{"language": ["en", "he"], "license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"]}
itayl/Hebrew-Mistral-7B-Q5_K_M-GGUF
null
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "en", "he", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:46:49+00:00
[]
[ "en", "he" ]
TAGS #transformers #gguf #llama-cpp #gguf-my-repo #en #he #license-apache-2.0 #endpoints_compatible #region-us
# itayl/Hebrew-Mistral-7B-Q5_K_M-GGUF This model was converted to GGUF format from 'yam-peleg/Hebrew-Mistral-7B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# itayl/Hebrew-Mistral-7B-Q5_K_M-GGUF\nThis model was converted to GGUF format from 'yam-peleg/Hebrew-Mistral-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #en #he #license-apache-2.0 #endpoints_compatible #region-us \n", "# itayl/Hebrew-Mistral-7B-Q5_K_M-GGUF\nThis model was converted to GGUF format from 'yam-peleg/Hebrew-Mistral-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Likich/llama3-finetune-qualcoding
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:47:16+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Uploaded model - **Developed by:** richie-ghost - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
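## Inference example (sketch)

A minimal loading sketch, not from the original card: it assumes the merged weights in this repo load through plain `transformers`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "richie-ghost/llama-3b-unsloth-quantized_merged"  # this repo (assumed)
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Briefly explain LoRA fine-tuning:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```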
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
richie-ghost/llama-3b-unsloth-quantized_merged
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:48:53+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: richie-ghost - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: richie-ghost\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: richie-ghost\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # job_postings_mlm_model_450k This model is a fine-tuned version of [giyoung-kwon-0902/job_postings_mlm_model_400k](https://huggingface.co/giyoung-kwon-0902/job_postings_mlm_model_400k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1113 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.153 | 1.0 | 17544 | 0.1361 | | 0.1215 | 2.0 | 35088 | 0.1113 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
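### Inference example (sketch)

A minimal fill-mask sketch, not part of the auto-generated card: it assumes the weights are hosted under this repo id (`<mask>` is RoBERTa's mask token).

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="giyoung-kwon-0902/job_postings_mlm_model_450k")  # this repo (assumed)
for pred in fill("We are hiring a senior <mask> engineer."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```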
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "giyoung-kwon-0902/job_postings_mlm_model_400k", "model-index": [{"name": "job_postings_mlm_model_450k", "results": []}]}
giyoung-kwon-0902/job_postings_mlm_model_450k
null
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:giyoung-kwon-0902/job_postings_mlm_model_400k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:49:01+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-giyoung-kwon-0902/job_postings_mlm_model_400k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
job\_postings\_mlm\_model\_450k =============================== This model is a fine-tuned version of giyoung-kwon-0902/job\_postings\_mlm\_model\_400k on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1113 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-giyoung-kwon-0902/job_postings_mlm_model_400k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pesc101/Mistral-7B-Instruct-v0.2-lbl-2x
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T13:49:53+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Zangs3011/llama3_8B_norobots <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama3_8B_norobots-GGUF/resolve/main/llama3_8B_norobots.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/llama3_8B_norobots-GGUF/resolve/main/llama3_8B_norobots.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/llama3_8B_norobots-GGUF/resolve/main/llama3_8B_norobots.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/llama3_8B_norobots-GGUF/resolve/main/llama3_8B_norobots.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/llama3_8B_norobots-GGUF/resolve/main/llama3_8B_norobots.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/llama3_8B_norobots-GGUF/resolve/main/llama3_8B_norobots.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/llama3_8B_norobots-GGUF/resolve/main/llama3_8B_norobots.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/llama3_8B_norobots-GGUF/resolve/main/llama3_8B_norobots.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/llama3_8B_norobots-GGUF/resolve/main/llama3_8B_norobots.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama3_8B_norobots-GGUF/resolve/main/llama3_8B_norobots.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama3_8B_norobots-GGUF/resolve/main/llama3_8B_norobots.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/llama3_8B_norobots-GGUF/resolve/main/llama3_8B_norobots.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/llama3_8B_norobots-GGUF/resolve/main/llama3_8B_norobots.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/llama3_8B_norobots-GGUF/resolve/main/llama3_8B_norobots.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/llama3_8B_norobots-GGUF/resolve/main/llama3_8B_norobots.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
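As a concrete companion to the Usage section above, here is a minimal sketch of loading one of these quants with llama-cpp-python. It is an illustration, not a documented recipe from this repo: the filename is taken from the Provided Quants table, while the context size, GPU offload setting, and prompt are assumptions you should adapt.

```python
# Hedged sketch: run a quant from this repo with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="llama3_8B_norobots.Q4_K_M.gguf",  # any file from the Provided Quants table
    n_ctx=4096,        # context window; lower this on small machines
    n_gpu_layers=-1,   # offload all layers if llama.cpp was built with GPU support, else 0
)

out = llm("Write one sentence about prune juice.", max_tokens=64)
print(out["choices"][0]["text"])
```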
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "library_name": "transformers", "tags": [], "base_model": "Zangs3011/llama3_8B_norobots", "quantized_by": "mradermacher"}
mradermacher/llama3_8B_norobots-GGUF
null
[ "transformers", "gguf", "en", "base_model:Zangs3011/llama3_8B_norobots", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:54:10+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-Zangs3011/llama3_8B_norobots #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-Zangs3011/llama3_8B_norobots #endpoints_compatible #region-us \n" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/aipib/sakana-dareties2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/sakana-dareties2-GGUF/resolve/main/sakana-dareties2.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/sakana-dareties2-GGUF/resolve/main/sakana-dareties2.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/sakana-dareties2-GGUF/resolve/main/sakana-dareties2.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/sakana-dareties2-GGUF/resolve/main/sakana-dareties2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/sakana-dareties2-GGUF/resolve/main/sakana-dareties2.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/sakana-dareties2-GGUF/resolve/main/sakana-dareties2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/sakana-dareties2-GGUF/resolve/main/sakana-dareties2.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/sakana-dareties2-GGUF/resolve/main/sakana-dareties2.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/sakana-dareties2-GGUF/resolve/main/sakana-dareties2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/sakana-dareties2-GGUF/resolve/main/sakana-dareties2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/sakana-dareties2-GGUF/resolve/main/sakana-dareties2.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/sakana-dareties2-GGUF/resolve/main/sakana-dareties2.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/sakana-dareties2-GGUF/resolve/main/sakana-dareties2.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/sakana-dareties2-GGUF/resolve/main/sakana-dareties2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/sakana-dareties2-GGUF/resolve/main/sakana-dareties2.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
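For completeness, here is a hedged sketch of fetching one of the quants above straight from the Hub and running it with llama-cpp-python; the repo id and filename come from this card's table, while the context size and the (Japanese) prompt are illustrative assumptions.

```python
# Hypothetical usage sketch (pip install huggingface_hub llama-cpp-python).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repo; the filename matches the Provided Quants table.
path = hf_hub_download(
    repo_id="mradermacher/sakana-dareties2-GGUF",
    filename="sakana-dareties2.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=2048)
print(llm("富士山はどこにありますか?", max_tokens=48)["choices"][0]["text"])
```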
{"language": ["en"], "library_name": "transformers", "tags": ["merge", "mergekit", "lazymergekit", "stabilityai/japanese-stablelm-base-gamma-7b", "augmxnt/shisa-gamma-7b-v1"], "base_model": "aipib/sakana-dareties2", "quantized_by": "mradermacher"}
mradermacher/sakana-dareties2-GGUF
null
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "stabilityai/japanese-stablelm-base-gamma-7b", "augmxnt/shisa-gamma-7b-v1", "en", "base_model:aipib/sakana-dareties2", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:54:41+00:00
[]
[ "en" ]
TAGS #transformers #gguf #merge #mergekit #lazymergekit #stabilityai/japanese-stablelm-base-gamma-7b #augmxnt/shisa-gamma-7b-v1 #en #base_model-aipib/sakana-dareties2 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #merge #mergekit #lazymergekit #stabilityai/japanese-stablelm-base-gamma-7b #augmxnt/shisa-gamma-7b-v1 #en #base_model-aipib/sakana-dareties2 #endpoints_compatible #region-us \n" ]
null
null
Just an imatrix quant of https://huggingface.co/jeiku/Fett-uccine_Mini_3B_GGUF, intended for use on non-flagship smartphones.
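Since the card targets low-end phones, here is a rough, hedged sketch of loading a 3B Q2_K quant with llama-cpp-python under tight resource limits; the filename, context size, and thread count are assumptions, not values stated by the author.

```python
# Speculative sketch for constrained devices: small KV cache, few threads.
from llama_cpp import Llama

llm = Llama(
    model_path="Fett-uccine_Mini_3B.q2_k.gguf",  # assumed filename; check the repo's actual file
    n_ctx=1024,   # keep the KV cache small on low-RAM devices
    n_threads=4,  # roughly match the device's performance cores
)
print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```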
{}
BlueNipples/Fett-uccine_Mini_3B-q2k-imat_GGUF
null
[ "gguf", "region:us" ]
null
2024-04-26T13:54:52+00:00
[]
[]
TAGS #gguf #region-us
Just an imatrix quant of URL, intended for use on non-flagship smartphones.
[]
[ "TAGS\n#gguf #region-us \n" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat This model is a fine-tuned version of [sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat](https://huggingface.co/sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat) on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set: - Loss: 1.1555 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 256 - total_eval_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1709 | 1.0 | 545 | 1.1553 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
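The card stops at the training recipe, so here is a hedged inference sketch using the standard transformers API; the repo id comes from this card, while the dtype, the chat template (assumed to be Zephyr-style and shipped with the tokenizer), and the generation settings are assumptions.

```python
# Illustrative only: load the fine-tuned checkpoint and generate a chat-style reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat-200k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain knowledge distillation in two sentences."}]
# Assumes the tokenizer ships a chat template, as Zephyr-style SFT checkpoints usually do.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```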
{"tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "alignment-handbook", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrachat_200k"], "base_model": "sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat", "model-index": [{"name": "sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat", "results": []}]}
sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat-200k
null
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T13:56:34+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-HuggingFaceH4/ultrachat_200k #base_model-sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat ================================================ This model is a fine-tuned version of sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat on the HuggingFaceH4/ultrachat\_200k dataset. It achieves the following results on the evaluation set: * Loss: 1.1555 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 8 * total\_train\_batch\_size: 256 * total\_eval\_batch\_size: 256 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.2+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 256\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-HuggingFaceH4/ultrachat_200k #base_model-sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 256\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Rimyy/TentativeGemma1epEv
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:57:43+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** sravaniayyagari - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
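Since the card describes an Unsloth fine-tune, one plausible way to reload it is Unsloth's own FastLanguageModel API, sketched below; the max_seq_length and 4-bit loading flags are assumptions, not values stated in the card.

```python
# Hedged sketch: reload the fine-tune with Unsloth for its faster inference path.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="sravaniayyagari/llama3_finetuned_1",
    max_seq_length=2048,   # assumption; match whatever the model was trained with
    load_in_4bit=True,     # assumption; saves memory at a small quality cost
)
FastLanguageModel.for_inference(model)  # enables Unsloth's optimized generation
```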
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b"}
sravaniayyagari/llama3_finetuned_1
null
[ "transformers", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:58:01+00:00
[]
[ "en" ]
TAGS #transformers #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: sravaniayyagari - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: sravaniayyagari\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: sravaniayyagari\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
FounderNest/mistral-7b-instruct-classifier-fit-assessment-finetuned-v3.4
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:58:05+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RNAMamba-14M-Contrastive This model is a fine-tuned version of [afg1/RNAMamba-14M](https://huggingface.co/afg1/RNAMamba-14M) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu118 - Datasets 2.18.0 - Tokenizers 0.15.2
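The card documents only the training setup; since the model is a contrastive fine-tune, one speculative use is extracting RNA sequence embeddings, sketched below. The pooling strategy (mean over tokens), the example sequence, and the presence of tokenizer files in the repo are all assumptions.

```python
# Speculative usage sketch: pooled hidden states as sequence embeddings.
import torch
from transformers import AutoModel, AutoTokenizer

repo = "afg1/RNAMamba-14M-Contrastive"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer("AUGGCUACGUAGCUAGC", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, d_model)
embedding = hidden.mean(dim=1)                  # mean-pool into one vector per sequence
```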
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "afg1/RNAMamba-14M", "model-index": [{"name": "RNAMamba-14M-Contrastive", "results": []}]}
afg1/RNAMamba-14M-Contrastive
null
[ "transformers", "safetensors", "mamba", "generated_from_trainer", "base_model:afg1/RNAMamba-14M", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T13:58:27+00:00
[]
[]
TAGS #transformers #safetensors #mamba #generated_from_trainer #base_model-afg1/RNAMamba-14M #license-apache-2.0 #endpoints_compatible #region-us
# RNAMamba-14M-Contrastive This model is a fine-tuned version of afg1/RNAMamba-14M on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu118 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# RNAMamba-14M-Contrastive\n\nThis model is a fine-tuned version of afg1/RNAMamba-14M on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu118\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mamba #generated_from_trainer #base_model-afg1/RNAMamba-14M #license-apache-2.0 #endpoints_compatible #region-us \n", "# RNAMamba-14M-Contrastive\n\nThis model is a fine-tuned version of afg1/RNAMamba-14M on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu118\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Uploaded model - **Developed by:** richie-ghost - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
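The repo name suggests these are LoRA weights on top of the 4-bit base, so one plausible loading path is PEFT, sketched below; whether the repo holds a full model or only adapter weights is an assumption here.

```python
# Hedged sketch: attach the (assumed) LoRA adapter to the 4-bit base with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b-bnb-4bit", device_map="auto")
model = PeftModel.from_pretrained(base, "richie-ghost/llama-3b-unsloth-quantized_lora")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")
```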
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
richie-ghost/llama-3b-unsloth-quantized_lora
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-26T13:58:58+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
# Uploaded model - Developed by: richie-ghost - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: richie-ghost\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n", "# Uploaded model\n\n- Developed by: richie-ghost\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-7b-hf-platypus_vxxiii-chat-added_lamini_v2 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 6 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.1 - Pytorch 2.2.0+cu121 - Datasets 2.14.6 - Tokenizers 0.15.1
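Given the PEFT/LoRA setup described above, a minimal loading sketch follows; it assumes this repo stores adapter weights (as PEFT SFT runs usually do) rather than merged full weights.

```python
# Hedged sketch: load the base model, then attach the fine-tuned adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
model = PeftModel.from_pretrained(base, "NassimB/mistral-7b-hf-platypus_vxxiii-chat-added_lamini_v2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```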
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistral-7b-hf-platypus_vxxiii-chat-added_lamini_v2", "results": []}]}
NassimB/mistral-7b-hf-platypus_vxxiii-chat-added_lamini_v2
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-26T14:01:24+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
# mistral-7b-hf-platypus_vxxiii-chat-added_lamini_v2 This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 6 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.1 - Pytorch 2.2.0+cu121 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# mistral-7b-hf-platypus_vxxiii-chat-added_lamini_v2\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 6\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.1\n- Pytorch 2.2.0+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n", "# mistral-7b-hf-platypus_vxxiii-chat-added_lamini_v2\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 6\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.1\n- Pytorch 2.2.0+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0.
Check that the requirements from the original repo cognitivecomputations/dolphin-2.9-llama3-8b are installed. In particular, check the python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages.
```bash
pip install autoawq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM

# Load the AWQ 4-bit quantized weights from the smashed repo.
model = AutoAWQForCausalLM.from_quantized("PrunaAI/cognitivecomputations-dolphin-2.9-llama3-8b-AWQ-4bit-smashed",
                                          trust_remote_code=True, device_map='auto')
# The tokenizer is taken from the original base model.
tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-2.9-llama3-8b")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model cognitivecomputations/dolphin-2.9-llama3-8b, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "cognitivecomputations/dolphin-2.9-llama3-8b"}
PrunaAI/cognitivecomputations-dolphin-2.9-llama3-8b-AWQ-4bit-smashed
null
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:cognitivecomputations/dolphin-2.9-llama3-8b", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T14:04:34+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #pruna-ai #conversational #base_model-cognitivecomputations/dolphin-2.9-llama3-8b #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentations to know more here - Join Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with awq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo cognitivecomputations/dolphin-2.9-llama3-8b installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. 2. Load & run the model. ## Configurations The configuration info are in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model cognitivecomputations/dolphin-2.9-llama3-8b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with awq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo cognitivecomputations/dolphin-2.9-llama3-8b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model cognitivecomputations/dolphin-2.9-llama3-8b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #pruna-ai #conversational #base_model-cognitivecomputations/dolphin-2.9-llama3-8b #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with awq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo cognitivecomputations/dolphin-2.9-llama3-8b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model cognitivecomputations/dolphin-2.9-llama3-8b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
token-classification
transformers
# SOTA Entity Recognition English Foundation Model by NuMind 🔥

This model provides the best embedding for the Entity Recognition task in English. It is an improved version of the model from our [**paper**](https://arxiv.org/abs/2402.15343).

**Check out other models by NuMind:**
* SOTA Multilingual Entity Recognition Foundation Model: [link](https://huggingface.co/numind/entity-recognition-multilingual-general-sota-v1)
* SOTA Sentiment Analysis Foundation Model: [English](https://huggingface.co/numind/generic-sentiment-v1), [Multilingual](https://huggingface.co/numind/generic-sentiment-multi-v1)

## About

[Roberta-base](https://huggingface.co/roberta-base) fine-tuned on the expanded version of [NuNER data](https://huggingface.co/datasets/numind/NuNER) using contrastive learning from [**NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data**](https://arxiv.org/abs/2402.15343).

**Metrics:**

Read more about the evaluation protocol & datasets in our [**paper**](https://arxiv.org/abs/2402.15343).

Here is the aggregated performance of the models over several datasets: k=X means that as training data, we took only X examples for each class, trained the model, and evaluated it on the full test set.

| Model | k=1 | k=4 | k=16 | k=64 |
|----------|----------|----------|----------|----------|
| RoBERTa-base | 24.5 | 44.7 | 58.1 | 65.4 |
| RoBERTa-base + NER-BERT pre-training | 32.3 | 50.9 | 61.9 | 67.6 |
| NuNER v0.1 | 34.3 | 54.6 | 64.0 | 68.7 |
| NuNER v1.0 | 39.4 | 59.6 | 67.8 | 71.5 |
| **NuNER v2.0** | **43.6** | **61.0** | **68.2** | **72.0** |

NuNER v1.0 has similar performance to 7B LLMs (70 times bigger than NuNER v1.0) created specifically for the NER task. Thus NuNER v2.0 should be even better than the 7B LLM.

| Model | k=8~16 | k=64~128 |
|----------|----------|----------|
| UniversalNER (7B) | 57.89 ± 4.34 | 71.02 ± 1.53 |
| NuNER v1.0 (100M) | 58.75 ± 0.93 | 70.30 ± 0.35 |

## Usage

Embeddings can be used out of the box or fine-tuned on specific datasets.

Get embeddings:

```python
import torch
import transformers

model = transformers.AutoModel.from_pretrained(
    'numind/NuNER-v2.0'
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    'numind/NuNER-v2.0'
)

text = [
    "NuMind is an AI company based in Paris and USA.",
    "See other models from us on https://huggingface.co/numind"
]
encoded_input = tokenizer(
    text,
    return_tensors='pt',
    padding=True,
    truncation=True
)
output = model(**encoded_input)
emb = output.last_hidden_state
```

## Citation

```
@misc{bogdanov2024nuner,
      title={NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data},
      author={Sergei Bogdanov and Alexandre Constantin and Timothée Bernard and Benoit Crabbé and Etienne Bernard},
      year={2024},
      eprint={2402.15343},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
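Since the card states the embeddings can also be fine-tuned, here is a minimal, hypothetical fine-tuning setup: the backbone is wrapped in a standard token-classification head. The `num_labels` value and any training data are placeholders, not part of this card:

```python
import transformers

# Attach a (randomly initialized) token-classification head to the NuNER backbone.
model = transformers.AutoModelForTokenClassification.from_pretrained(
    'numind/NuNER-v2.0',
    num_labels=5,  # assumption: the number of entity classes in your dataset
)
tokenizer = transformers.AutoTokenizer.from_pretrained('numind/NuNER-v2.0')
# From here, train with transformers.Trainer or a custom loop as usual.
```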
{"language": ["en"], "license": "mit", "tags": ["token-classification", "entity-recognition", "foundation-model", "feature-extraction", "RoBERTa", "generic"], "datasets": ["numind/NuNER"], "pipeline_tag": "token-classification", "inference": false}
numind/NuNER-v2.0
null
[ "transformers", "safetensors", "roberta", "feature-extraction", "token-classification", "entity-recognition", "foundation-model", "RoBERTa", "generic", "en", "dataset:numind/NuNER", "arxiv:2402.15343", "license:mit", "region:us" ]
null
2024-04-26T14:06:13+00:00
[ "2402.15343" ]
[ "en" ]
TAGS #transformers #safetensors #roberta #feature-extraction #token-classification #entity-recognition #foundation-model #RoBERTa #generic #en #dataset-numind/NuNER #arxiv-2402.15343 #license-mit #region-us
SOTA Entity Recognition English Foundation Model by NuMind ========================================================== This model provides the best embedding for the Entity Recognition task in English. It is an improved version of the model from our paper. Checkout other models by NuMind: * SOTA Multilingual Entity Recognition Foundation Model: link * SOTA Sentiment Analysis Foundation Model: English, Multilingual About ----- Roberta-base fine-tuned on the expanded version of NuNER data using contrastive learning from NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data. Metrics: Read more about evaluation protocol & datasets in our NuNER data using contrastive learning from paper. Here is the aggregated performance of the models over several datasets: k=X means that as training data, we took only X examples for each class, trained the model, and evaluated it on the full test set. NuNER v1.0 has similar performance to 7B LLMs (70 times bigger than NuNER v1.0) created specifically for the NER task. Thus NuNER v2.0 should be even better than the 7b LLM. Model: UniversalNER (7B), k=8~16: 57.89 ± 4.34, k=64~128: 71.02 ± 1.53 Model: NuNER v1.0 (100M), k=8~16: 58.75 ± 0.93, k=64~128: 70.30 ± 0.35 Usage ----- Embeddings can be used out of the box or fine-tuned on specific datasets. Get embeddings:
[]
[ "TAGS\n#transformers #safetensors #roberta #feature-extraction #token-classification #entity-recognition #foundation-model #RoBERTa #generic #en #dataset-numind/NuNER #arxiv-2402.15343 #license-mit #region-us \n" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# GPT2WaP

This model is a [gpt2](https://huggingface.co/gpt2) model trained from scratch on the War and Peace book. It achieves the following results on the evaluation set:
- Loss: 9.0987
- Perplexity: 8943.6289

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 40
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Perplexity |
|:-------------:|:-------:|:----:|:---------------:|:----------:|
| 10.157 | 0.6897 | 10 | 9.2336 | 10235.7480 |
| 9.2581 | 1.3793 | 20 | 8.9452 | 7671.1870 |
| 8.8166 | 2.0690 | 30 | 9.4917 | 13248.7207 |
| 8.5094 | 2.7586 | 40 | 9.5417 | 13928.9434 |
| 8.0914 | 3.4483 | 50 | 9.5507 | 14054.4785 |
| 7.663 | 4.1379 | 60 | 9.4760 | 13043.2441 |
| 7.3275 | 4.8276 | 70 | 9.3510 | 11510.8203 |
| 6.9788 | 5.5172 | 80 | 9.0822 | 8797.7188 |
| 6.6639 | 6.2069 | 90 | 8.9803 | 7945.4014 |
| 6.3749 | 6.8966 | 100 | 8.6494 | 5706.8130 |
| 6.0702 | 7.5862 | 110 | 8.5696 | 5268.9268 |
| 5.9107 | 8.2759 | 120 | 8.3612 | 4277.6265 |
| 5.6724 | 8.9655 | 130 | 8.4294 | 4579.6484 |
| 5.5949 | 9.6552 | 140 | 8.4934 | 4882.4316 |
| 5.4904 | 10.3448 | 150 | 8.4683 | 4761.3862 |
| 5.3792 | 11.0345 | 160 | 8.4647 | 4744.5381 |
| 5.3091 | 11.7241 | 170 | 8.5767 | 5306.3535 |
| 5.233 | 12.4138 | 180 | 8.5257 | 5042.5068 |
| 5.2252 | 13.1034 | 190 | 8.5328 | 5078.8433 |
| 5.1445 | 13.7931 | 200 | 8.5871 | 5361.9390 |
| 5.0824 | 14.4828 | 210 | 8.5784 | 5315.4043 |
| 5.0272 | 15.1724 | 220 | 8.6434 | 5672.6934 |
| 4.979 | 15.8621 | 230 | 8.6836 | 5905.4277 |
| 4.924 | 16.5517 | 240 | 8.7112 | 6070.2261 |
| 4.9394 | 17.2414 | 250 | 8.7233 | 6144.3931 |
| 4.8663 | 17.9310 | 260 | 8.7411 | 6254.5234 |
| 4.8599 | 18.6207 | 270 | 8.7824 | 6518.7896 |
| 4.8572 | 19.3103 | 280 | 8.8338 | 6862.5586 |
| 4.8064 | 20.0 | 290 | 8.7774 | 6485.7441 |
| 4.746 | 20.6897 | 300 | 8.8458 | 6944.8892 |
| 4.7569 | 21.3793 | 310 | 8.8436 | 6930.1416 |
| 4.6954 | 22.0690 | 320 | 8.8618 | 7057.1084 |
| 4.7277 | 22.7586 | 330 | 8.8706 | 7119.4478 |
| 4.6432 | 23.4483 | 340 | 8.9084 | 7393.6138 |
| 4.6032 | 24.1379 | 350 | 8.9111 | 7413.5176 |
| 4.6198 | 24.8276 | 360 | 8.9526 | 7728.0210 |
| 4.5874 | 25.5172 | 370 | 8.9740 | 7895.1641 |
| 4.5455 | 26.2069 | 380 | 8.9365 | 7604.7129 |
| 4.5313 | 26.8966 | 390 | 8.9738 | 7893.2969 |
| 4.5297 | 27.5862 | 400 | 8.9659 | 7831.8110 |
| 4.5279 | 28.2759 | 410 | 8.9914 | 8034.0391 |
| 4.4974 | 28.9655 | 420 | 9.0293 | 8344.2529 |
| 4.4554 | 29.6552 | 430 | 9.0191 | 8259.1533 |
| 4.4651 | 30.3448 | 440 | 9.0236 | 8296.4531 |
| 4.4647 | 31.0345 | 450 | 9.0349 | 8391.1279 |
| 4.4668 | 31.7241 | 460 | 9.0530 | 8543.8340 |
| 4.4264 | 32.4138 | 470 | 9.0722 | 8709.4141 |
| 4.4008 | 33.1034 | 480 | 9.0876 | 8844.6104 |
| 4.3982 | 33.7931 | 490 | 9.0711 | 8700.4893 |
| 4.3846 | 34.4828 | 500 | 9.0894 | 8860.7441 |
| 4.3971 | 35.1724 | 510 | 9.0879 | 8847.6973 |
| 4.379 | 35.8621 | 520 | 9.0949 | 8909.6025 |
| 4.3696 | 36.5517 | 530 | 9.1097 | 9042.2295 |
| 4.3447 | 37.2414 | 540 | 9.1007 | 8961.6953 |
| 4.3796 | 37.9310 | 550 | 9.0869 | 8839.0781 |
| 4.364 | 38.6207 | 560 | 9.0987 | 8943.6289 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
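The reported perplexity is simply the exponential of the validation cross-entropy loss, which can be checked directly:

```python
import math

# Perplexity = exp(cross-entropy loss); matches the final row of the table above.
print(math.exp(9.0987))  # ≈ 8943.5
```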
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "GPT2WaP", "results": []}]}
Kasdeja23/GPT2WaP
null
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T14:06:14+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
GPT2WaP ======= This model is a gpt2 model trained from scratch on the War and peace book. It achieves the following results on the evaluation set: * Loss: 9.0987 * Perplexity: 8943.6289 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 2 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 512 * total\_eval\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 100 * num\_epochs: 40 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.3.0+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 512\n* total\\_eval\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 40\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 512\n* total\\_eval\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 40\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper large urdu - huzaifa

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
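A minimal usage sketch, assuming the checkpoint loads like any standard Whisper model; the audio file name is a placeholder:

```python
from transformers import pipeline

# Transcribe Urdu speech with the fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="huzaifa1117/whisper-large-urdu-3",
)
print(asr("sample_urdu.wav")["text"])  # "sample_urdu.wav" is a placeholder path
```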
{"language": ["ur"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper large urdu - huzaifa", "results": []}]}
huzaifa1117/whisper-large-urdu-3
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ur", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T14:07:49+00:00
[]
[ "ur" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ur #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us
# Whisper large urdu - huzaifa This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 1000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# Whisper large urdu - huzaifa\n\nThis model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- training_steps: 1000\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ur #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us \n", "# Whisper large urdu - huzaifa\n\nThis model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- training_steps: 1000\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# mistral-7b-hf-platypus_vxxiii-chat-added_lamini_v3

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 6
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.1
- Pytorch 2.2.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
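Since this repository contains a PEFT adapter rather than full weights, a minimal inference sketch (assuming the adapter loads onto the stated base model) would be:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then apply this SFT adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(
    base, "NassimB/mistral-7b-hf-platypus_vxxiii-chat-added_lamini_v3"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```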
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistral-7b-hf-platypus_vxxiii-chat-added_lamini_v3", "results": []}]}
NassimB/mistral-7b-hf-platypus_vxxiii-chat-added_lamini_v3
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-26T14:08:32+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
# mistral-7b-hf-platypus_vxxiii-chat-added_lamini_v3 This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 6 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.1 - Pytorch 2.2.0+cu121 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# mistral-7b-hf-platypus_vxxiii-chat-added_lamini_v3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 6\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.1\n- Pytorch 2.2.0+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n", "# mistral-7b-hf-platypus_vxxiii-chat-added_lamini_v3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 6\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.1\n- Pytorch 2.2.0+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
text-generation
transformers
# MixtureOfPhi3

<p align="center">
<img src="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11201acc-4089-416d-921b-cbd71fbf8ddb_1024x1024.jpeg" width="300" class="center"/>
</p>

**MixtureOfPhi3** is a Mixture of Experts (MoE) made with the following models using mergekit:
* [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)
* [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)

It has been created using [LazyMergekit-Phi3](https://colab.research.google.com/drive/1Upb8JOAS3-K-iemblew34p9h1H6wtCeU?usp=sharing).

This run is only for development purposes, since merging 2 identical models does not bring any performance benefits, but once specialized finetunes of Phi3 models are available, it will be a starting point for creating MoEs from them.

## ©️ Credits

* [mlabonne's phixtral](https://huggingface.co/mlabonne/phixtral-4x2_8), whose inference code I adapted to Phi3's architecture.
* [mergekit](https://github.com/cg123/mergekit), whose code I tweaked to merge Phi3s.

The experts have been merged using `cheap_embed`, where each model is assigned a vector representation of words - such as experts for scientific work, reasoning, math, etc. Try your own in the link above!

## 🧩 Configuration

```yaml
base_model: microsoft/Phi-3-mini-128k-instruct
gate_mode: cheap_embed
dtype: float16
experts:
  - source_model: microsoft/Phi-3-mini-128k-instruct
    positive_prompts: ["research, logic, math, science"]
  - source_model: microsoft/Phi-3-mini-128k-instruct
    positive_prompts: ["creative, art"]
```

## 💻 Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "paulilioaica/MixtureOfPhi3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
)

prompt = "How many continents are there?"
chat = f"<|system|>\nYou are a helpful AI assistant.<|end|>\n<|user|>{prompt}\n<|assistant|>"
tokenized_input = tokenizer.encode(chat, return_tensors="pt")

outputs = model.generate(tokenized_input, max_new_tokens=128, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(tokenizer.decode(outputs[0]))
```
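For readers unfamiliar with MoE layers, here is a conceptual sketch of the routing step. It illustrates top-k expert routing in general, not the exact phixtral/Phi3 implementation; `gate` and `experts` are assumed modules:

```python
import torch
import torch.nn.functional as F

def moe_forward(x, gate, experts, k=2):
    """Route each token to its top-k experts and mix their outputs.

    x: (tokens, d_model); gate: linear layer giving (tokens, n_experts) scores;
    experts: list of feed-forward modules mapping d_model -> d_model.
    """
    logits = gate(x)                              # score every expert per token
    weights, idx = torch.topk(logits, k, dim=-1)  # keep the k best experts
    weights = F.softmax(weights, dim=-1)          # normalize their mixing weights
    out = torch.zeros_like(x)
    for slot in range(k):
        for e, expert in enumerate(experts):
            mask = idx[:, slot] == e
            if mask.any():
                out[mask] += weights[mask, slot, None] * expert(x[mask])
    return out
```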
{"license": "apache-2.0", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "phi3_mergekit", "microsoft/Phi-3-mini-128k-instruct"], "base_model": ["microsoft/Phi-3-mini-128k-instruct", "microsoft/Phi-3-mini-128k-instruct"]}
paulilioaica/MixtureOfPhi3
null
[ "transformers", "safetensors", "phi3", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "phi3_mergekit", "microsoft/Phi-3-mini-128k-instruct", "conversational", "custom_code", "base_model:microsoft/Phi-3-mini-128k-instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T14:08:38+00:00
[]
[]
TAGS #transformers #safetensors #phi3 #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #phi3_mergekit #microsoft/Phi-3-mini-128k-instruct #conversational #custom_code #base_model-microsoft/Phi-3-mini-128k-instruct #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# MixtureOfPhi3 <p align="center"> <img src="URL width="300" class="center"/> </p> MixtureOfPhi3 is a Mixure of Experts (MoE) made with the following models using mergekit: * Phi-3-mini-128k-instruct * Phi-3-mini-128k-instruct This has been created using LazyMergekit-Phi3 This run is only for development purposes, since merging 2 identical models does not bring any performance benefits, but once specialized finetunes of Phi3 models will be available, it will be a starting point for creating MoE from them. ## ©️ Credits * mlabonne's phixtral where I adapted the inference code to Phi3's architecture. * mergekit code which I tweaked to merge Phi3s These have been merged using 'cheap_embed' where each model is assigned a vector representation of words - such as experts for scientific work, reasoning, math etc. Try your own in the link above ! ## Configuration ## Usage
[ "# MixtureOfPhi3\n\n<p align=\"center\">\n<img src=\"URL width=\"300\" class=\"center\"/>\n</p>\n\n\nMixtureOfPhi3 is a Mixure of Experts (MoE) made with the following models using mergekit:\n* Phi-3-mini-128k-instruct\n* Phi-3-mini-128k-instruct\n\nThis has been created using LazyMergekit-Phi3\n\nThis run is only for development purposes, since merging 2 identical models does not bring any performance benefits, but once specialized finetunes of Phi3 models will be available, it will be a starting point for creating MoE from them.", "## ©️ Credits\n* mlabonne's phixtral where I adapted the inference code to Phi3's architecture.\n* mergekit code which I tweaked to merge Phi3s\n\n\nThese have been merged using 'cheap_embed' where each model is assigned a vector representation of words - such as experts for scientific work, reasoning, math etc.\n\nTry your own in the link above !", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #phi3 #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #phi3_mergekit #microsoft/Phi-3-mini-128k-instruct #conversational #custom_code #base_model-microsoft/Phi-3-mini-128k-instruct #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# MixtureOfPhi3\n\n<p align=\"center\">\n<img src=\"URL width=\"300\" class=\"center\"/>\n</p>\n\n\nMixtureOfPhi3 is a Mixure of Experts (MoE) made with the following models using mergekit:\n* Phi-3-mini-128k-instruct\n* Phi-3-mini-128k-instruct\n\nThis has been created using LazyMergekit-Phi3\n\nThis run is only for development purposes, since merging 2 identical models does not bring any performance benefits, but once specialized finetunes of Phi3 models will be available, it will be a starting point for creating MoE from them.", "## ©️ Credits\n* mlabonne's phixtral where I adapted the inference code to Phi3's architecture.\n* mergekit code which I tweaked to merge Phi3s\n\n\nThese have been merged using 'cheap_embed' where each model is assigned a vector representation of words - such as experts for scientific work, reasoning, math etc.\n\nTry your own in the link above !", "## Configuration", "## Usage" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# esm2_t130_150M-lora-classifier_2024-04-26_10-08-51

This model is a fine-tuned version of [facebook/esm2_t30_150M_UR50D](https://huggingface.co/facebook/esm2_t30_150M_UR50D) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4537
- Accuracy: 0.8984

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0008701568055793088
- train_batch_size: 28
- eval_batch_size: 28
- seed: 8893
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6764 | 1.0 | 55 | 0.6794 | 0.5820 |
| 0.5521 | 2.0 | 110 | 0.6192 | 0.6777 |
| 0.5409 | 3.0 | 165 | 0.5147 | 0.7383 |
| 0.5518 | 4.0 | 220 | 0.3518 | 0.8672 |
| 0.1386 | 5.0 | 275 | 0.3596 | 0.8574 |
| 0.303 | 6.0 | 330 | 0.4030 | 0.8359 |
| 0.1962 | 7.0 | 385 | 0.3143 | 0.8848 |
| 0.1501 | 8.0 | 440 | 0.3232 | 0.8652 |
| 0.2994 | 9.0 | 495 | 0.3014 | 0.8770 |
| 0.0914 | 10.0 | 550 | 0.2980 | 0.8887 |
| 0.2108 | 11.0 | 605 | 0.2854 | 0.8770 |
| 0.2896 | 12.0 | 660 | 0.3684 | 0.8691 |
| 0.0818 | 13.0 | 715 | 0.3349 | 0.8828 |
| 0.3152 | 14.0 | 770 | 0.3530 | 0.8848 |
| 0.0554 | 15.0 | 825 | 0.3371 | 0.8887 |
| 0.1928 | 16.0 | 880 | 0.3347 | 0.875 |
| 0.2658 | 17.0 | 935 | 0.3765 | 0.8867 |
| 0.4242 | 18.0 | 990 | 0.4166 | 0.8945 |
| 0.0964 | 19.0 | 1045 | 0.3400 | 0.8945 |
| 0.0375 | 20.0 | 1100 | 0.3581 | 0.9004 |
| 0.1781 | 21.0 | 1155 | 0.3816 | 0.8848 |
| 0.1563 | 22.0 | 1210 | 0.3940 | 0.8867 |
| 0.017 | 23.0 | 1265 | 0.4098 | 0.8926 |
| 0.1866 | 24.0 | 1320 | 0.4710 | 0.8770 |
| 0.0632 | 25.0 | 1375 | 0.4541 | 0.8828 |
| 0.1501 | 26.0 | 1430 | 0.4645 | 0.8828 |
| 0.109 | 27.0 | 1485 | 0.4434 | 0.8926 |
| 0.0353 | 28.0 | 1540 | 0.4264 | 0.8984 |
| 0.4502 | 29.0 | 1595 | 0.4479 | 0.8984 |
| 0.0341 | 30.0 | 1650 | 0.4537 | 0.8984 |

### Framework versions

- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.16.1
- Tokenizers 0.15.2
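As a hypothetical inference sketch (the adapter should attach to the stated ESM2 base; `num_labels=2` is an assumption, since the card does not state the number of classes):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Rebuild the classifier: ESM2 base + this LoRA adapter.
base = AutoModelForSequenceClassification.from_pretrained(
    "facebook/esm2_t30_150M_UR50D",
    num_labels=2,  # assumption: binary classification
)
model = PeftModel.from_pretrained(
    base, "wcvz/esm2_t130_150M-lora-classifier_2024-04-26_10-08-51"
)
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t30_150M_UR50D")
```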
{"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "facebook/esm2_t30_150M_UR50D", "model-index": [{"name": "esm2_t130_150M-lora-classifier_2024-04-26_10-08-51", "results": []}]}
wcvz/esm2_t130_150M-lora-classifier_2024-04-26_10-08-51
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:facebook/esm2_t30_150M_UR50D", "license:mit", "region:us" ]
null
2024-04-26T14:08:51+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-facebook/esm2_t30_150M_UR50D #license-mit #region-us
esm2\_t130\_150M-lora-classifier\_2024-04-26\_10-08-51 ====================================================== This model is a fine-tuned version of facebook/esm2\_t30\_150M\_UR50D on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.4537 * Accuracy: 0.8984 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0008701568055793088 * train\_batch\_size: 28 * eval\_batch\_size: 28 * seed: 8893 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.2.1 * Datasets 2.16.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0008701568055793088\n* train\\_batch\\_size: 28\n* eval\\_batch\\_size: 28\n* seed: 8893\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.1\n* Datasets 2.16.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-facebook/esm2_t30_150M_UR50D #license-mit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0008701568055793088\n* train\\_batch\\_size: 28\n* eval\\_batch\\_size: 28\n* seed: 8893\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.1\n* Datasets 2.16.1\n* Tokenizers 0.15.2" ]
reinforcement-learning
null
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
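For context, the core update taught in Unit 4 weights each action's log-probability by the discounted return that followed it. A minimal sketch of that loss (not this repository's training code):

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """REINFORCE objective: -sum_t log pi(a_t|s_t) * G_t."""
    returns, g = [], 0.0
    for r in reversed(rewards):        # discounted returns, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # stabilize
    return -(torch.stack(log_probs) * returns).sum()
```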
{"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "PixelCopter", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "30.00 +/- 28.77", "name": "mean_reward", "verified": false}]}]}]}
i-pj/PixelCopter
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-26T14:09:19+00:00
[]
[]
TAGS #Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing Pixelcopter-PLE-v0 This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# llava-1.5-7b-hf-ft-mix-vsft

This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.4e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
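Because only a PEFT adapter is stored here, a hedged loading sketch (assuming the adapter applies to the stated LLaVA base) looks like:

```python
from peft import PeftModel
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Load the LLaVA base, then apply this fine-tuned adapter on top of it.
base = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf")
model = PeftModel.from_pretrained(base, "Praveen0309/llava-1.5-7b-hf-ft-mix-vsft")
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
```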
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "llava-hf/llava-1.5-7b-hf", "model-index": [{"name": "llava-1.5-7b-hf-ft-mix-vsft", "results": []}]}
Praveen0309/llava-1.5-7b-hf-ft-mix-vsft
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:llava-hf/llava-1.5-7b-hf", "region:us" ]
null
2024-04-26T14:09:47+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-llava-hf/llava-1.5-7b-hf #region-us
# llava-1.5-7b-hf-ft-mix-vsft This model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.4e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.19.1
[ "# llava-1.5-7b-hf-ft-mix-vsft\n\nThis model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-llava-hf/llava-1.5-7b-hf #region-us \n", "# llava-1.5-7b-hf-ft-mix-vsft\n\nThis model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.19.1" ]
text-classification
transformers
# Model Trained Using AutoTrain

- Problem type: Text Classification

## Validation Metrics

loss: 0.03381425514817238

f1_macro: 0.9910410929202866

f1_micro: 0.9908675799086758

f1_weighted: 0.9908473335613555

precision_macro: 0.9909727371947719

precision_micro: 0.9908675799086758

precision_weighted: 0.9908883151237302

recall_macro: 0.9911698494022667

recall_micro: 0.9908675799086758

recall_weighted: 0.9908675799086758

accuracy: 0.9908675799086758
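Note that accuracy, micro-precision, micro-recall, and micro-F1 are all identical above; for single-label multi-class classification this is expected, as a quick check with placeholder labels shows:

```python
from sklearn.metrics import accuracy_score, f1_score

# Dummy single-label predictions: micro-F1 coincides with accuracy.
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 2, 1, 1]
print(accuracy_score(y_true, y_pred))             # 0.8
print(f1_score(y_true, y_pred, average="micro"))  # 0.8
```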
{"tags": ["autotrain", "text-classification"], "datasets": ["autotrain-pmf0g-rj8fa/autotrain-data"], "widget": [{"text": "I love AutoTrain"}]}
borggAI/alpha-prompt-classification
null
[ "transformers", "safetensors", "distilbert", "text-classification", "autotrain", "dataset:autotrain-pmf0g-rj8fa/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T14:09:49+00:00
[]
[]
TAGS #transformers #safetensors #distilbert #text-classification #autotrain #dataset-autotrain-pmf0g-rj8fa/autotrain-data #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.03381425514817238 f1_macro: 0.9910410929202866 f1_micro: 0.9908675799086758 f1_weighted: 0.9908473335613555 precision_macro: 0.9909727371947719 precision_micro: 0.9908675799086758 precision_weighted: 0.9908883151237302 recall_macro: 0.9911698494022667 recall_micro: 0.9908675799086758 recall_weighted: 0.9908675799086758 accuracy: 0.9908675799086758
[ "# Model Trained Using AutoTrain\n\n- Problem type: Text Classification", "## Validation Metrics\nloss: 0.03381425514817238\n\nf1_macro: 0.9910410929202866\n\nf1_micro: 0.9908675799086758\n\nf1_weighted: 0.9908473335613555\n\nprecision_macro: 0.9909727371947719\n\nprecision_micro: 0.9908675799086758\n\nprecision_weighted: 0.9908883151237302\n\nrecall_macro: 0.9911698494022667\n\nrecall_micro: 0.9908675799086758\n\nrecall_weighted: 0.9908675799086758\n\naccuracy: 0.9908675799086758" ]
[ "TAGS\n#transformers #safetensors #distilbert #text-classification #autotrain #dataset-autotrain-pmf0g-rj8fa/autotrain-data #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoTrain\n\n- Problem type: Text Classification", "## Validation Metrics\nloss: 0.03381425514817238\n\nf1_macro: 0.9910410929202866\n\nf1_micro: 0.9908675799086758\n\nf1_weighted: 0.9908473335613555\n\nprecision_macro: 0.9909727371947719\n\nprecision_micro: 0.9908675799086758\n\nprecision_weighted: 0.9908883151237302\n\nrecall_macro: 0.9911698494022667\n\nrecall_micro: 0.9908675799086758\n\nrecall_weighted: 0.9908675799086758\n\naccuracy: 0.9908675799086758" ]
null
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
FounderNest/Mistral-7B-Instruct-v0.2-AWQ-classifier-fit-assessment-finetuned-v3.4
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T14:09:55+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-3 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
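A minimal usage sketch (not part of the original card): it assumes the checkpoint loads as a standard sequence-classification model, which is consistent with the repository's `text-classification` tag; the label meanings are not documented in the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the fine-tuned Pythia-1b checkpoint exposes a standard sequence-classification head.
model_id = "AlignmentResearch/robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A thoroughly enjoyable film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities; label names are not documented in the card
```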
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-3", "results": []}]}
AlignmentResearch/robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-3
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T14:11:19+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-3 This model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - fatimaaa1/model2 <Gallery /> ## Model description These are fatimaaa1/model2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: fatimaaa1/model2/vae. ## Trigger words You should use a bussiness card to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](fatimaaa1/model2/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use A minimal sketch, assuming the standard diffusers SDXL + LoRA loading API:
```python
import torch
from diffusers import DiffusionPipeline

# Minimal sketch, assuming standard diffusers LoRA loading: base SDXL pipeline
# plus the LoRA weights from this repository.
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("fatimaaa1/model2")
image = pipeline("a bussiness card").images[0]  # trigger phrase as spelled in the card
```
#### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a bussiness card", "widget": []}
fatimaaa1/model2
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-26T14:11:19+00:00
[]
[]
TAGS #diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# SDXL LoRA DreamBooth - fatimaaa1/model2 <Gallery /> ## Model description These are fatimaaa1/model2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: fatimaaa1/model2/vae. ## Trigger words You should use a bussiness card to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# SDXL LoRA DreamBooth - fatimaaa1/model2\n\n<Gallery />", "## Model description\n\nThese are fatimaaa1/model2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: fatimaaa1/model2/vae.", "## Trigger words\n\nYou should use a bussiness card to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# SDXL LoRA DreamBooth - fatimaaa1/model2\n\n<Gallery />", "## Model description\n\nThese are fatimaaa1/model2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: fatimaaa1/model2/vae.", "## Trigger words\n\nYou should use a bussiness card to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
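This card gives no usage details, so the following is a hypothetical sketch based only on the repository tags (`t5`, `text2text-generation`); the input/output convention is an assumption inferred from the repository name (code-mixed Banglish/English).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumptions: standard T5 seq2seq checkpoint; the example input (romanized Bengali,
# i.e. "Banglish") is hypothetical, inferred only from the repository name.
model_id = "Ayon128/code-mixed_Banglish_English_0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("ami tomake onek miss korchi", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```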
{"library_name": "transformers", "tags": []}
Ayon128/code-mixed_Banglish_English_0
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T14:11:31+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # privacy-200k-masking This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0949 - eval_overall_precision: 0.9099 - eval_overall_recall: 0.9306 - eval_overall_f1: 0.9201 - eval_overall_accuracy: 0.9692 - eval_ACCOUNTNAME_f1: 0.9863 - eval_ACCOUNTNUMBER_f1: 0.9551 - eval_AGE_f1: 0.9454 - eval_AMOUNT_f1: 0.9481 - eval_BIC_f1: 0.9140 - eval_BITCOINADDRESS_f1: 0.9227 - eval_BUILDINGNUMBER_f1: 0.9056 - eval_CITY_f1: 0.9351 - eval_COMPANYNAME_f1: 0.9621 - eval_COUNTY_f1: 0.9756 - eval_CREDITCARDCVV_f1: 0.9201 - eval_CREDITCARDISSUER_f1: 0.9767 - eval_CREDITCARDNUMBER_f1: 0.8506 - eval_CURRENCY_f1: 0.7277 - eval_CURRENCYCODE_f1: 0.8398 - eval_CURRENCYNAME_f1: 0.1576 - eval_CURRENCYSYMBOL_f1: 0.9216 - eval_DATE_f1: 0.7988 - eval_DOB_f1: 0.6103 - eval_EMAIL_f1: 0.9862 - eval_ETHEREUMADDRESS_f1: 0.9624 - eval_EYECOLOR_f1: 0.9779 - eval_FIRSTNAME_f1: 0.9636 - eval_GENDER_f1: 0.9852 - eval_HEIGHT_f1: 0.9771 - eval_IBAN_f1: 0.9513 - eval_IP_f1: 0.0 - eval_IPV4_f1: 0.8240 - eval_IPV6_f1: 0.7389 - eval_JOBAREA_f1: 0.9713 - eval_JOBTITLE_f1: 0.9819 - eval_JOBTYPE_f1: 0.9743 - eval_LASTNAME_f1: 0.9439 - eval_LITECOINADDRESS_f1: 0.8069 - eval_MAC_f1: 0.9668 - eval_MASKEDNUMBER_f1: 0.8084 - eval_MIDDLENAME_f1: 0.9401 - eval_NEARBYGPSCOORDINATE_f1: 0.9963 - eval_ORDINALDIRECTION_f1: 0.9904 - eval_PASSWORD_f1: 0.9690 - eval_PHONEIMEI_f1: 0.9842 - eval_PHONENUMBER_f1: 0.9690 - eval_PIN_f1: 0.8584 - eval_PREFIX_f1: 0.9594 - eval_SECONDARYADDRESS_f1: 0.9880 - eval_SEX_f1: 0.9952 - eval_SSN_f1: 0.9813 - eval_STATE_f1: 0.9664 - eval_STREET_f1: 0.9607 - eval_TIME_f1: 0.9560 - eval_URL_f1: 0.9866 - eval_USERAGENT_f1: 0.9901 - eval_USERNAME_f1: 0.9743 - eval_VEHICLEVIN_f1: 0.9699 - eval_VEHICLEVRM_f1: 0.9725 - eval_ZIPCODE_f1: 0.9018 - eval_runtime: 3609.2787 - eval_samples_per_second: 17.394 - eval_steps_per_second: 8.697 - epoch: 1.0 - step: 73241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 2 ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
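A minimal inference sketch (not from the original card), assuming the checkpoint works with the standard token-classification pipeline; the entity labels are taken from the per-label F1 list above.

```python
from transformers import pipeline

# Assumption: standard token-classification checkpoint whose labels follow the PII
# entity types listed in the evaluation metrics above (FIRSTNAME, EMAIL, PHONENUMBER, ...).
pii_tagger = pipeline(
    "token-classification",
    model="taro-pudding/privacy-200k-masking",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

text = "Contact Jane Doe at jane.doe@example.com or +1-202-555-0143."  # synthetic example
for entity in pii_tagger(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```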
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-multilingual-cased", "model-index": [{"name": "privacy-200k-masking", "results": []}]}
taro-pudding/privacy-200k-masking
null
[ "transformers", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T14:11:34+00:00
[]
[]
TAGS #transformers #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# privacy-200k-masking This model is a fine-tuned version of distilbert-base-multilingual-cased on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0949 - eval_overall_precision: 0.9099 - eval_overall_recall: 0.9306 - eval_overall_f1: 0.9201 - eval_overall_accuracy: 0.9692 - eval_ACCOUNTNAME_f1: 0.9863 - eval_ACCOUNTNUMBER_f1: 0.9551 - eval_AGE_f1: 0.9454 - eval_AMOUNT_f1: 0.9481 - eval_BIC_f1: 0.9140 - eval_BITCOINADDRESS_f1: 0.9227 - eval_BUILDINGNUMBER_f1: 0.9056 - eval_CITY_f1: 0.9351 - eval_COMPANYNAME_f1: 0.9621 - eval_COUNTY_f1: 0.9756 - eval_CREDITCARDCVV_f1: 0.9201 - eval_CREDITCARDISSUER_f1: 0.9767 - eval_CREDITCARDNUMBER_f1: 0.8506 - eval_CURRENCY_f1: 0.7277 - eval_CURRENCYCODE_f1: 0.8398 - eval_CURRENCYNAME_f1: 0.1576 - eval_CURRENCYSYMBOL_f1: 0.9216 - eval_DATE_f1: 0.7988 - eval_DOB_f1: 0.6103 - eval_EMAIL_f1: 0.9862 - eval_ETHEREUMADDRESS_f1: 0.9624 - eval_EYECOLOR_f1: 0.9779 - eval_FIRSTNAME_f1: 0.9636 - eval_GENDER_f1: 0.9852 - eval_HEIGHT_f1: 0.9771 - eval_IBAN_f1: 0.9513 - eval_IP_f1: 0.0 - eval_IPV4_f1: 0.8240 - eval_IPV6_f1: 0.7389 - eval_JOBAREA_f1: 0.9713 - eval_JOBTITLE_f1: 0.9819 - eval_JOBTYPE_f1: 0.9743 - eval_LASTNAME_f1: 0.9439 - eval_LITECOINADDRESS_f1: 0.8069 - eval_MAC_f1: 0.9668 - eval_MASKEDNUMBER_f1: 0.8084 - eval_MIDDLENAME_f1: 0.9401 - eval_NEARBYGPSCOORDINATE_f1: 0.9963 - eval_ORDINALDIRECTION_f1: 0.9904 - eval_PASSWORD_f1: 0.9690 - eval_PHONEIMEI_f1: 0.9842 - eval_PHONENUMBER_f1: 0.9690 - eval_PIN_f1: 0.8584 - eval_PREFIX_f1: 0.9594 - eval_SECONDARYADDRESS_f1: 0.9880 - eval_SEX_f1: 0.9952 - eval_SSN_f1: 0.9813 - eval_STATE_f1: 0.9664 - eval_STREET_f1: 0.9607 - eval_TIME_f1: 0.9560 - eval_URL_f1: 0.9866 - eval_USERAGENT_f1: 0.9901 - eval_USERNAME_f1: 0.9743 - eval_VEHICLEVIN_f1: 0.9699 - eval_VEHICLEVRM_f1: 0.9725 - eval_ZIPCODE_f1: 0.9018 - eval_runtime: 3609.2787 - eval_samples_per_second: 17.394 - eval_steps_per_second: 8.697 - epoch: 1.0 - step: 73241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 2 ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# privacy-200k-masking\n\nThis model is a fine-tuned version of distilbert-base-multilingual-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.0949\n- eval_overall_precision: 0.9099\n- eval_overall_recall: 0.9306\n- eval_overall_f1: 0.9201\n- eval_overall_accuracy: 0.9692\n- eval_ACCOUNTNAME_f1: 0.9863\n- eval_ACCOUNTNUMBER_f1: 0.9551\n- eval_AGE_f1: 0.9454\n- eval_AMOUNT_f1: 0.9481\n- eval_BIC_f1: 0.9140\n- eval_BITCOINADDRESS_f1: 0.9227\n- eval_BUILDINGNUMBER_f1: 0.9056\n- eval_CITY_f1: 0.9351\n- eval_COMPANYNAME_f1: 0.9621\n- eval_COUNTY_f1: 0.9756\n- eval_CREDITCARDCVV_f1: 0.9201\n- eval_CREDITCARDISSUER_f1: 0.9767\n- eval_CREDITCARDNUMBER_f1: 0.8506\n- eval_CURRENCY_f1: 0.7277\n- eval_CURRENCYCODE_f1: 0.8398\n- eval_CURRENCYNAME_f1: 0.1576\n- eval_CURRENCYSYMBOL_f1: 0.9216\n- eval_DATE_f1: 0.7988\n- eval_DOB_f1: 0.6103\n- eval_EMAIL_f1: 0.9862\n- eval_ETHEREUMADDRESS_f1: 0.9624\n- eval_EYECOLOR_f1: 0.9779\n- eval_FIRSTNAME_f1: 0.9636\n- eval_GENDER_f1: 0.9852\n- eval_HEIGHT_f1: 0.9771\n- eval_IBAN_f1: 0.9513\n- eval_IP_f1: 0.0\n- eval_IPV4_f1: 0.8240\n- eval_IPV6_f1: 0.7389\n- eval_JOBAREA_f1: 0.9713\n- eval_JOBTITLE_f1: 0.9819\n- eval_JOBTYPE_f1: 0.9743\n- eval_LASTNAME_f1: 0.9439\n- eval_LITECOINADDRESS_f1: 0.8069\n- eval_MAC_f1: 0.9668\n- eval_MASKEDNUMBER_f1: 0.8084\n- eval_MIDDLENAME_f1: 0.9401\n- eval_NEARBYGPSCOORDINATE_f1: 0.9963\n- eval_ORDINALDIRECTION_f1: 0.9904\n- eval_PASSWORD_f1: 0.9690\n- eval_PHONEIMEI_f1: 0.9842\n- eval_PHONENUMBER_f1: 0.9690\n- eval_PIN_f1: 0.8584\n- eval_PREFIX_f1: 0.9594\n- eval_SECONDARYADDRESS_f1: 0.9880\n- eval_SEX_f1: 0.9952\n- eval_SSN_f1: 0.9813\n- eval_STATE_f1: 0.9664\n- eval_STREET_f1: 0.9607\n- eval_TIME_f1: 0.9560\n- eval_URL_f1: 0.9866\n- eval_USERAGENT_f1: 0.9901\n- eval_USERNAME_f1: 0.9743\n- eval_VEHICLEVIN_f1: 0.9699\n- eval_VEHICLEVRM_f1: 0.9725\n- eval_ZIPCODE_f1: 0.9018\n- eval_runtime: 3609.2787\n- eval_samples_per_second: 17.394\n- eval_steps_per_second: 8.697\n- epoch: 1.0\n- step: 73241", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine_with_restarts\n- lr_scheduler_warmup_ratio: 0.2\n- num_epochs: 2", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# privacy-200k-masking\n\nThis model is a fine-tuned version of distilbert-base-multilingual-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.0949\n- eval_overall_precision: 0.9099\n- eval_overall_recall: 0.9306\n- eval_overall_f1: 0.9201\n- eval_overall_accuracy: 0.9692\n- eval_ACCOUNTNAME_f1: 0.9863\n- eval_ACCOUNTNUMBER_f1: 0.9551\n- eval_AGE_f1: 0.9454\n- eval_AMOUNT_f1: 0.9481\n- eval_BIC_f1: 0.9140\n- eval_BITCOINADDRESS_f1: 0.9227\n- eval_BUILDINGNUMBER_f1: 0.9056\n- eval_CITY_f1: 0.9351\n- eval_COMPANYNAME_f1: 0.9621\n- eval_COUNTY_f1: 0.9756\n- eval_CREDITCARDCVV_f1: 0.9201\n- eval_CREDITCARDISSUER_f1: 0.9767\n- eval_CREDITCARDNUMBER_f1: 0.8506\n- eval_CURRENCY_f1: 0.7277\n- eval_CURRENCYCODE_f1: 0.8398\n- eval_CURRENCYNAME_f1: 0.1576\n- eval_CURRENCYSYMBOL_f1: 0.9216\n- eval_DATE_f1: 0.7988\n- eval_DOB_f1: 0.6103\n- eval_EMAIL_f1: 0.9862\n- eval_ETHEREUMADDRESS_f1: 0.9624\n- eval_EYECOLOR_f1: 0.9779\n- eval_FIRSTNAME_f1: 0.9636\n- eval_GENDER_f1: 0.9852\n- eval_HEIGHT_f1: 0.9771\n- eval_IBAN_f1: 0.9513\n- eval_IP_f1: 0.0\n- eval_IPV4_f1: 0.8240\n- eval_IPV6_f1: 0.7389\n- eval_JOBAREA_f1: 0.9713\n- eval_JOBTITLE_f1: 0.9819\n- eval_JOBTYPE_f1: 0.9743\n- eval_LASTNAME_f1: 0.9439\n- eval_LITECOINADDRESS_f1: 0.8069\n- eval_MAC_f1: 0.9668\n- eval_MASKEDNUMBER_f1: 0.8084\n- eval_MIDDLENAME_f1: 0.9401\n- eval_NEARBYGPSCOORDINATE_f1: 0.9963\n- eval_ORDINALDIRECTION_f1: 0.9904\n- eval_PASSWORD_f1: 0.9690\n- eval_PHONEIMEI_f1: 0.9842\n- eval_PHONENUMBER_f1: 0.9690\n- eval_PIN_f1: 0.8584\n- eval_PREFIX_f1: 0.9594\n- eval_SECONDARYADDRESS_f1: 0.9880\n- eval_SEX_f1: 0.9952\n- eval_SSN_f1: 0.9813\n- eval_STATE_f1: 0.9664\n- eval_STREET_f1: 0.9607\n- eval_TIME_f1: 0.9560\n- eval_URL_f1: 0.9866\n- eval_USERAGENT_f1: 0.9901\n- eval_USERNAME_f1: 0.9743\n- eval_VEHICLEVIN_f1: 0.9699\n- eval_VEHICLEVRM_f1: 0.9725\n- eval_ZIPCODE_f1: 0.9018\n- eval_runtime: 3609.2787\n- eval_samples_per_second: 17.394\n- eval_steps_per_second: 8.697\n- epoch: 1.0\n- step: 73241", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine_with_restarts\n- lr_scheduler_warmup_ratio: 0.2\n- num_epochs: 2", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
savanladani/nividous-7b-sft-lora
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T14:12:42+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Ayon128/code-mixed_Banglish_English_1
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T14:13:14+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speech_ocean_hubert_mdd This model is a fine-tuned version of [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2027 - Wer: 0.0517 - Cer: 0.0499 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-------:|:----:|:---------------:|:------:|:------:| | 42.7069 | 0.9873 | 39 | 36.7247 | 0.9992 | 0.9977 | | 16.2787 | 2.0 | 79 | 7.8315 | 1.0 | 1.0 | | 6.7896 | 2.9873 | 118 | 4.5645 | 1.0 | 1.0 | | 4.0104 | 4.0 | 158 | 3.8654 | 1.0 | 1.0 | | 3.8037 | 4.9873 | 197 | 3.8060 | 1.0 | 1.0 | | 3.7898 | 6.0 | 237 | 3.7695 | 1.0 | 1.0 | | 3.7777 | 6.9873 | 276 | 3.7717 | 1.0 | 1.0 | | 3.7442 | 8.0 | 316 | 3.7320 | 1.0 | 1.0 | | 3.7286 | 8.9873 | 355 | 3.6978 | 1.0 | 1.0 | | 3.6272 | 10.0 | 395 | 3.5089 | 1.0 | 1.0 | | 3.0921 | 10.9873 | 434 | 2.6068 | 0.9992 | 0.9997 | | 2.2556 | 12.0 | 474 | 1.6832 | 0.5880 | 0.6815 | | 1.7791 | 12.9873 | 513 | 1.2117 | 0.3861 | 0.4433 | | 1.2731 | 14.0 | 553 | 0.7338 | 0.1793 | 0.1505 | | 0.9596 | 14.9873 | 592 | 0.4892 | 0.1220 | 0.1005 | | 0.7152 | 16.0 | 632 | 0.3525 | 0.0892 | 0.0752 | | 0.521 | 16.9873 | 671 | 0.2843 | 0.0704 | 0.0623 | | 0.4791 | 18.0 | 711 | 0.2351 | 0.0607 | 0.0568 | | 0.3992 | 18.9873 | 750 | 0.2120 | 0.0547 | 0.0523 | | 0.4245 | 19.7468 | 780 | 0.2027 | 0.0517 | 0.0499 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
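The card reports WER/CER but omits a usage snippet. Below is a minimal inference sketch, not the author's published code: it assumes the repo ships a CTC processor loadable via `AutoProcessor` (HuBERT fine-tunes typically reuse `Wav2Vec2Processor`) and that `sample.wav` is a hypothetical 16 kHz mono recording.

```python
import torch
import librosa  # assumed available for audio loading
from transformers import AutoProcessor, HubertForCTC

model_id = "nrshoudi/speech_ocean_hubert_mdd"
processor = AutoProcessor.from_pretrained(model_id)
model = HubertForCTC.from_pretrained(model_id)
model.eval()

# Load a hypothetical 16 kHz mono waveform.
speech, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, frames, vocab)

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```

Greedy arg-max decoding is the usual way CTC models are scored for WER/CER when no external language model is mentioned, as here.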
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "facebook/hubert-large-ll60k", "model-index": [{"name": "speech_ocean_hubert_mdd", "results": []}]}
nrshoudi/speech_ocean_hubert_mdd
null
[ "transformers", "tensorboard", "safetensors", "hubert", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/hubert-large-ll60k", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T14:13:20+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #hubert #automatic-speech-recognition #generated_from_trainer #base_model-facebook/hubert-large-ll60k #license-apache-2.0 #endpoints_compatible #region-us
speech\_ocean\_hubert\_mdd ========================== This model is a fine-tuned version of facebook/hubert-large-ll60k on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.2027 * Wer: 0.0517 * Cer: 0.0499 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 20 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
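For anyone reproducing the run, the hyperparameter list above maps onto `TrainingArguments` roughly as sketched below; `output_dir` is a placeholder, and `fp16=True` is inferred from the "Native AMP" note rather than confirmed by the card.

```python
from transformers import TrainingArguments

# Adam betas (0.9, 0.999) and epsilon 1e-08 from the card are the
# Trainer defaults, so they need no explicit arguments here.
args = TrainingArguments(
    output_dir="speech_ocean_hubert_mdd",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=20,
    fp16=True,  # "Native AMP" in the card
)
```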
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #hubert #automatic-speech-recognition #generated_from_trainer #base_model-facebook/hubert-large-ll60k #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
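Because the "How to Get Started" section above is empty, here is a minimal sketch under stated assumptions: the checkpoint loads with the standard T5 classes, and the direction (code-mixed Banglish in, English out) is only inferred from the repository name. The same pattern would apply to the sibling `code-mixed_Banglish_English_3` and `_4` checkpoints that appear later in this dump.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "Ayon128/code-mixed_Banglish_English_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Hypothetical code-mixed input; any task prefix the authors trained
# with is undocumented, so none is added here.
text = "ami office e jabo tomorrow morning"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```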
{"library_name": "transformers", "tags": []}
Ayon128/code-mixed_Banglish_English_2
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T14:14:20+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
visual-question-answering
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
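The template carries no usage code, so the sketch below fills the gap under assumptions: the fp16 sharded export still loads through the standard BLIP-2 classes, `example.jpg` is a placeholder image, and `device_map="auto"` requires the `accelerate` package.

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

model_id = "Entreprenerdly/blip2-opt-2.7b-fp16-sharded"
processor = Blip2Processor.from_pretrained(model_id)
model = Blip2ForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires `accelerate`
)

image = Image.open("example.jpg")  # placeholder image
prompt = "Question: what is shown in this image? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16
)

out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```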
{"library_name": "transformers", "tags": []}
Entreprenerdly/blip2-opt-2.7b-fp16-sharded
null
[ "transformers", "safetensors", "blip-2", "visual-question-answering", "arxiv:1910.09700", "endpoints_compatible", "8-bit", "region:us" ]
null
2024-04-26T14:17:08+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #blip-2 #visual-question-answering #arxiv-1910.09700 #endpoints_compatible #8-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #blip-2 #visual-question-answering #arxiv-1910.09700 #endpoints_compatible #8-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-dmae-va-U5-100-iN

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6381
- Accuracy: 0.8667

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 100

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.9 | 7 | 1.3812 | 0.45 |
| 1.3848 | 1.94 | 15 | 1.3606 | 0.5 |
| 1.3686 | 2.97 | 23 | 1.3075 | 0.5333 |
| 1.2965 | 4.0 | 31 | 1.2370 | 0.4667 |
| 1.2965 | 4.9 | 38 | 1.1168 | 0.5333 |
| 1.1753 | 5.94 | 46 | 1.0310 | 0.5667 |
| 1.0294 | 6.97 | 54 | 0.9316 | 0.6 |
| 0.902 | 8.0 | 62 | 0.8728 | 0.6833 |
| 0.902 | 8.9 | 69 | 0.8129 | 0.7667 |
| 0.7812 | 9.94 | 77 | 0.7006 | 0.8 |
| 0.6419 | 10.97 | 85 | 0.6381 | 0.8667 |
| 0.5109 | 12.0 | 93 | 0.6327 | 0.8167 |
| 0.3838 | 12.9 | 100 | 0.5442 | 0.8667 |
| 0.3838 | 13.94 | 108 | 0.6755 | 0.75 |
| 0.285 | 14.97 | 116 | 0.7756 | 0.7167 |
| 0.2672 | 16.0 | 124 | 0.8107 | 0.7167 |
| 0.2466 | 16.9 | 131 | 0.5219 | 0.8333 |
| 0.2466 | 17.94 | 139 | 0.7041 | 0.7833 |
| 0.2312 | 18.97 | 147 | 0.7879 | 0.75 |
| 0.1933 | 20.0 | 155 | 0.7090 | 0.8 |
| 0.1692 | 20.9 | 162 | 0.5395 | 0.8333 |
| 0.1578 | 21.94 | 170 | 0.6419 | 0.8167 |
| 0.1578 | 22.97 | 178 | 0.5736 | 0.8333 |
| 0.1321 | 24.0 | 186 | 0.7471 | 0.75 |
| 0.1114 | 24.9 | 193 | 0.6447 | 0.7667 |
| 0.1385 | 25.94 | 201 | 0.6158 | 0.8167 |
| 0.1385 | 26.97 | 209 | 0.6467 | 0.8 |
| 0.1136 | 28.0 | 217 | 0.6180 | 0.85 |
| 0.0997 | 28.9 | 224 | 0.8578 | 0.75 |
| 0.1064 | 29.94 | 232 | 0.6778 | 0.8167 |
| 0.0775 | 30.97 | 240 | 0.8124 | 0.8 |
| 0.0775 | 32.0 | 248 | 0.7783 | 0.8 |
| 0.0921 | 32.9 | 255 | 0.8320 | 0.7333 |
| 0.0919 | 33.94 | 263 | 0.8310 | 0.7833 |
| 0.0888 | 34.97 | 271 | 0.6576 | 0.85 |
| 0.0888 | 36.0 | 279 | 0.7044 | 0.8333 |
| 0.0693 | 36.9 | 286 | 0.7608 | 0.8167 |
| 0.061 | 37.94 | 294 | 0.7802 | 0.8 |
| 0.0699 | 38.97 | 302 | 0.7762 | 0.8167 |
| 0.0652 | 40.0 | 310 | 0.7579 | 0.8 |
| 0.0652 | 40.9 | 317 | 0.9985 | 0.75 |
| 0.0562 | 41.94 | 325 | 0.8027 | 0.8167 |
| 0.0534 | 42.97 | 333 | 0.9705 | 0.7833 |
| 0.0519 | 44.0 | 341 | 0.7301 | 0.8333 |
| 0.0519 | 44.9 | 348 | 0.8433 | 0.8 |
| 0.0529 | 45.94 | 356 | 0.8534 | 0.8 |
| 0.0772 | 46.97 | 364 | 0.8562 | 0.8 |
| 0.0644 | 48.0 | 372 | 0.8419 | 0.8 |
| 0.0644 | 48.9 | 379 | 1.1251 | 0.7667 |
| 0.0467 | 49.94 | 387 | 0.7537 | 0.8333 |
| 0.0576 | 50.97 | 395 | 0.7517 | 0.8333 |
| 0.0344 | 52.0 | 403 | 0.8343 | 0.8 |
| 0.0663 | 52.9 | 410 | 0.7636 | 0.8 |
| 0.0663 | 53.94 | 418 | 0.8253 | 0.8167 |
| 0.0353 | 54.97 | 426 | 0.9348 | 0.8 |
| 0.0524 | 56.0 | 434 | 0.8217 | 0.8167 |
| 0.0479 | 56.9 | 441 | 0.7586 | 0.8167 |
| 0.0479 | 57.94 | 449 | 0.8147 | 0.8 |
| 0.0595 | 58.97 | 457 | 1.0000 | 0.7833 |
| 0.0475 | 60.0 | 465 | 0.9291 | 0.7833 |
| 0.049 | 60.9 | 472 | 0.9588 | 0.7833 |
| 0.0398 | 61.94 | 480 | 0.9501 | 0.8 |
| 0.0398 | 62.97 | 488 | 0.9499 | 0.8 |
| 0.0496 | 64.0 | 496 | 0.9279 | 0.8 |
| 0.0354 | 64.9 | 503 | 0.9677 | 0.75 |
| 0.0325 | 65.94 | 511 | 0.8371 | 0.8333 |
| 0.0325 | 66.97 | 519 | 0.9683 | 0.8 |
| 0.0335 | 68.0 | 527 | 1.0455 | 0.7833 |
| 0.0375 | 68.9 | 534 | 0.9027 | 0.8167 |
| 0.0424 | 69.94 | 542 | 0.8043 | 0.85 |
| 0.0383 | 70.97 | 550 | 0.9035 | 0.7833 |
| 0.0383 | 72.0 | 558 | 0.9360 | 0.7833 |
| 0.0295 | 72.9 | 565 | 0.9841 | 0.7833 |
| 0.0307 | 73.94 | 573 | 0.9300 | 0.8 |
| 0.0376 | 74.97 | 581 | 0.9630 | 0.7833 |
| 0.0376 | 76.0 | 589 | 0.9777 | 0.7833 |
| 0.0259 | 76.9 | 596 | 0.9323 | 0.8 |
| 0.0345 | 77.94 | 604 | 0.9075 | 0.8 |
| 0.0346 | 78.97 | 612 | 0.8951 | 0.8 |
| 0.0319 | 80.0 | 620 | 0.9676 | 0.8 |
| 0.0319 | 80.9 | 627 | 0.9884 | 0.8 |
| 0.0226 | 81.94 | 635 | 0.9851 | 0.7833 |
| 0.033 | 82.97 | 643 | 0.9710 | 0.7833 |
| 0.0262 | 84.0 | 651 | 0.9851 | 0.7833 |
| 0.0262 | 84.9 | 658 | 0.9868 | 0.7833 |
| 0.0345 | 85.94 | 666 | 0.9702 | 0.7833 |
| 0.0299 | 86.97 | 674 | 0.9889 | 0.7833 |
| 0.0347 | 88.0 | 682 | 1.0003 | 0.7833 |
| 0.0347 | 88.9 | 689 | 0.9913 | 0.7833 |
| 0.0288 | 89.94 | 697 | 0.9859 | 0.7833 |
| 0.0198 | 90.32 | 700 | 0.9858 | 0.7833 |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
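Since the card never shows how to run the classifier, here is a minimal inference sketch; the image path is hypothetical, and the label names depend on whatever `id2label` mapping was saved with the fine-tuned config, since the dataset itself is undocumented.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTForImageClassification

model_id = "Augusto777/vit-base-patch16-224-dmae-va-U5-100-iN"
processor = AutoImageProcessor.from_pretrained(model_id)
model = ViTForImageClassification.from_pretrained(model_id)
model.eval()

image = Image.open("example.png").convert("RGB")  # hypothetical input
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(-1).item()
print(model.config.id2label[pred])
```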
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224", "model-index": [{"name": "vit-base-patch16-224-dmae-va-U5-100-iN", "results": []}]}
Augusto777/vit-base-patch16-224-dmae-va-U5-100-iN
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T14:18:03+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
vit-base-patch16-224-dmae-va-U5-100-iN ====================================== This model is a fine-tuned version of google/vit-base-patch16-224 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.6381 * Accuracy: 0.8667 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.05 * num\_epochs: 100 ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu118 * Datasets 2.16.1 * Tokenizers 0.15.0
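The hyperparameters restated above translate to roughly this `TrainingArguments` configuration; `output_dir` is a placeholder, and the Adam betas/epsilon are omitted because the listed values are the Trainer defaults.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vit-base-patch16-224-dmae-va-U5-100-iN",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 128
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=100,
)
```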
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu118\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu118\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dsfdsf2/distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.1556 - Validation Loss: 1.8940 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.1556 | 1.8940 | 0 | ### Framework versions - Transformers 4.40.1 - TensorFlow 2.16.1 - Datasets 2.19.0 - Tokenizers 0.19.1
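A minimal usage sketch for this checkpoint: the repo ships TensorFlow weights (note the `tf` tag below), so the pipeline is pinned to `framework="tf"`; with TensorFlow 2.16 the `tf-keras` compatibility package may also be needed. The example sentence is illustrative only.

```python
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="dsfdsf2/distilroberta-base-finetuned-wikitext2",
    framework="tf",  # TensorFlow checkpoint
)

# RoBERTa-style models use `<mask>` as the mask token.
for pred in fill("The quick brown fox jumps over the lazy <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```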
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilroberta-base", "model-index": [{"name": "dsfdsf2/distilroberta-base-finetuned-wikitext2", "results": []}]}
dsfdsf2/distilroberta-base-finetuned-wikitext2
null
[ "transformers", "tf", "roberta", "fill-mask", "generated_from_keras_callback", "base_model:distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T14:18:20+00:00
[]
[]
TAGS #transformers #tf #roberta #fill-mask #generated_from_keras_callback #base_model-distilroberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
dsfdsf2/distilroberta-base-finetuned-wikitext2 ============================================== This model is a fine-tuned version of distilroberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 2.1556 * Validation Loss: 1.8940 * Epoch: 0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': 2e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\_decay\_rate': 0.01} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.40.1 * TensorFlow 2.16.1 * Datasets 2.19.0 * Tokenizers 0.19.1
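The optimizer dictionary restated above corresponds to the `AdamWeightDecay` class from `transformers`; reconstructing it directly would look roughly like this (the card gives a fixed learning rate, so no decay schedule is attached):

```python
from transformers import AdamWeightDecay

# Constructor arguments mirror the keys listed in the card.
optimizer = AdamWeightDecay(
    learning_rate=2e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.01,
)
```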
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 2e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.16.1\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tf #roberta #fill-mask #generated_from_keras_callback #base_model-distilroberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 2e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.16.1\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Ayon128/code-mixed_Banglish_English_4
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T14:18:24+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Ayon128/code-mixed_Banglish_English_3
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T14:19:13+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# sbawa/TinyLlama-1.1B-Chat-v1.0-intel-dpo-Q4_K_M-GGUF This model was converted to GGUF format from [`davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo`](https://huggingface.co/davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo sbawa/TinyLlama-1.1B-Chat-v1.0-intel-dpo-Q4_K_M-GGUF --model tinyllama-1.1b-chat-v1.0-intel-dpo.Q4_K_M.gguf -p "The meaning of life and the universe is" ``` Server: ```bash llama-server --hf-repo sbawa/TinyLlama-1.1B-Chat-v1.0-intel-dpo-Q4_K_M-GGUF --model tinyllama-1.1b-chat-v1.0-intel-dpo.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-chat-v1.0-intel-dpo.Q4_K_M.gguf -n 128 ```
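The commands above cover the llama.cpp binaries; the same quantized file can also be driven from Python. A minimal sketch, assuming the llama-cpp-python bindings are installed (`pip install llama-cpp-python`) and the GGUF file has already been downloaded locally; the card itself only documents the llama.cpp CLI and server:

```python
# Minimal sketch using the llama-cpp-python bindings; the bindings themselves
# are an assumption, since the card only shows the llama.cpp CLI and server.
from llama_cpp import Llama

llm = Llama(
    model_path="tinyllama-1.1b-chat-v1.0-intel-dpo.Q4_K_M.gguf",  # local GGUF file
    n_ctx=2048,  # matches the -c 2048 used in the server example above
)
result = llm("The meaning of life and the universe is", max_tokens=128)
print(result["choices"][0]["text"])
```

Loading the file directly keeps inference fully local, mirroring what the llama-cli invocation above does.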
{"language": ["en"], "license": "apache-2.0", "tags": ["dpo", "llama-cpp", "gguf-my-repo"], "datasets": ["argilla/distilabel-intel-orca-dpo-pairs"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}
sbawa/TinyLlama-1.1B-Chat-v1.0-intel-dpo-Q4_K_M-GGUF
null
[ "gguf", "dpo", "llama-cpp", "gguf-my-repo", "en", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2024-04-26T14:20:02+00:00
[]
[ "en" ]
TAGS #gguf #dpo #llama-cpp #gguf-my-repo #en #dataset-argilla/distilabel-intel-orca-dpo-pairs #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us
# sbawa/TinyLlama-1.1B-Chat-v1.0-intel-dpo-Q4_K_M-GGUF This model was converted to GGUF format from 'davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo' using URL via URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo.
[ "# sbawa/TinyLlama-1.1B-Chat-v1.0-intel-dpo-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #dpo #llama-cpp #gguf-my-repo #en #dataset-argilla/distilabel-intel-orca-dpo-pairs #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us \n", "# sbawa/TinyLlama-1.1B-Chat-v1.0-intel-dpo-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]