| column | dtype | values / lengths |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | listlengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
| arxiv | listlengths | 0 to 201 |
| languages | listlengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | listlengths | 0 to 722 |
| processed_texts | listlengths | 1 to 723 |
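The column summary above is the schema of the model-card dump that follows. As a quick orientation, the hedged sketch below shows how a dataset with this schema could be inspected with the `datasets` library; the repository id `your-username/model-cards-dump` is a hypothetical placeholder, since the dump does not name its source repo.

```python
# Hedged sketch: inspect a dataset with the schema summarized above.
# "your-username/model-cards-dump" is a hypothetical placeholder id.
from datasets import load_dataset

ds = load_dataset("your-username/model-cards-dump", split="train")

print(ds.column_names)            # pipeline_tag, library_name, text, metadata, id, ...
row = ds[0]
print(row["id"], row["pipeline_tag"], row["library_name"], row["created_at"])
print(row["text"][:300])          # start of the raw model card
print(len(row["tags"]), "tags:", row["tags"][:5])
```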
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gemma7b This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 10 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0 - Pytorch 2.1.2+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "google/gemma-7b", "model-index": [{"name": "gemma7b", "results": []}]}
iTia/gemma7b
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:google/gemma-7b", "license:gemma", "region:us" ]
null
2024-04-17T04:47:07+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-7b #license-gemma #region-us
# gemma7b This model is a fine-tuned version of google/gemma-7b on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 10 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0 - Pytorch 2.1.2+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
[ "# gemma7b\n\nThis model is a fine-tuned version of google/gemma-7b on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 10", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-7b #license-gemma #region-us \n", "# gemma7b\n\nThis model is a fine-tuned version of google/gemma-7b on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 10", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
null
null
![An eagle flying high up in the sky](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F304f2c7a-fc67-4df4-ba57-c6f38f86826c_2688x1536.png) ### RWKV EagleX 7B v2 Model > **!Important!: This is not meant to be used with huggingface transformers library** > [Use the Hugging Face varient instead, found here (v5-EagleX-v2-7B-HF)](https://huggingface.co/RWKV/v5-EagleX-v2-7B-HF) > > The following is the raw representation of the EagleX 7B v2 model. For use with our own set of trainers > > > This is not an instruct tune model! (soon...) ## Quickstart with the hugging face transformer library [See the huggingface version here (v5-EagleX-v2-7B-HF)](huggingface.co/RWKV/v5-EagleX-v2-7B-HF) ``` model = AutoModelForCausalLM.from_pretrained("RWKV/v5-Eagle-7B-HF", trust_remote_code=True).to(torch.float32) tokenizer = AutoTokenizer.from_pretrained("RWKV/v5-Eagle-7B-HF", trust_remote_code=True) ``` ## Evaluation The following shows the progression of the model from 1.1T trained to 2.25T trained. |Model |Eagle-7B-HF|EagleX-7B-HF-v1|EagleX-7B-HF-v2| |----------------------|-----------|---------------|---------------| |Param Count |7.52 B |7.52 B |7.52 B | |Tokens Trained |1.1 T |1.7 T |2.25 T | |avg_acc |0.4822 |0.5391 |0.5495 | |glue (acc) |0.5752 |0.7463 |0.7439 | |anli (acc) |0.3594 |0.4847 |0.5097 | |mnli (acc) |0.3802 |0.7928 |0.7884 | |mnli_mismatch (acc) |0.3687 |0.7985 |0.784 | |swag (acc) |0.568 |0.5814 |0.5905 | |lambada_standard (acc)|0.685 |0.686 |0.7004 | |lambada_openai (acc) |0.7425 |0.7522 |0.7502 | |mmlu (acc) |0.3321 |0.4014 |0.438 | |winogrande (acc) |0.674 |0.7206 |0.7332 | |wnli (acc) |0.4225 |0.4648 |0.493 | |truthfulqa (acc) |0.3303 |0.3268 |0.3401 | |logiqa (acc) |0.2458 |0.2458 |0.2458 | |logiqa2 (acc) |0.2494 |0.2595 |0.2621 | |sciq (acc) |0.955 |0.96 |0.93 | |piqa (acc) |0.7704 |0.7758 |0.7764 | |arc_easy (acc) |0.7382 |0.7555 |0.7445 | |arc_challenge (acc) |0.3951 |0.4087 |0.4155 | |hellaswag (acc) |0.5264 |0.5411 |0.56 | |openbookqa (acc) |0.302 |0.296 |0.304 | |mathqa (acc) |0.26 |0.26 |0.2593 | |arithmetic (acc) |0.245 |0.0634 |0.1703 | Compared against other top performing models in the same weight class. |Model |OLMo-7B |falcon-7b |Llama-2-7b-hf|EagleX-7B-HF-v2|Mistral-7B-v0.1| |----------------------|---------------|----------------|-------------|---------------|---------------| |Param Count |6.89 B |6.92 B |6.74 B |7.52 B |7.24 B | |Tokens Trained |2.5 T |1.5 T |2 T |2.25 T |2 - 7 T? 
| |avg_acc |0.4578 |0.4775 |0.5045 |0.5495 |0.5676 | |glue (acc) |0.474 |0.4578 |0.4289 |0.7439 |0.515 | |anli (acc) |0.3478 |0.3541 |0.3697 |0.5097 |0.3803 | |mnli (acc) |0.3294 |0.3893 |0.4269 |0.7884 |0.4542 | |mnli_mismatch (acc) |0.3348 |0.404 |0.4395 |0.784 |0.4632 | |swag (acc) |0.5512 |0.5685 |0.5658 |0.5905 |0.5756 | |lambada_standard (acc)|0.6396 |0.6868 |0.6808 |0.7004 |0.6944 | |lambada_openai (acc) |0.6872 |0.746 |0.7353 |0.7502 |0.7553 | |mmlu (acc) |0.2812 |0.2512 |0.4077 |0.438 |0.5964 | |winogrande (acc) |0.6725 |0.6709 |0.6914 |0.7332 |0.7364 | |wnli (acc) |0.5775 |0.4789 |0.4648 |0.493 |0.5775 | |truthfulqa (acc) |0.3015 |0.2826 |0.3205 |0.3401 |0.3537 | |logiqa (acc) |0.2335 |0.2151 |0.2535 |0.2458 |0.2427 | |logiqa2 (acc) |0.2506 |0.2252 |0.2564 |0.2621 |0.3022 | |sciq (acc) |0.927 |0.944 |0.939 |0.93 |0.959 | |piqa (acc) |0.7878 |0.7949 |0.7807 |0.7764 |0.8052 | |arc_easy (acc) |0.7353 |0.7479 |0.7643 |0.7445 |0.8081 | |arc_challenge (acc) |0.3677 |0.4027 |0.4309 |0.4155 |0.5009 | |hellaswag (acc) |0.5572 |0.5772 |0.5713 |0.56 |0.6131 | |openbookqa (acc) |0.292 |0.306 |0.316 |0.304 |0.33 | |mathqa (acc) |0.26 |0.2884 |0.2801 |0.2593 |0.3554 | |arithmetic (acc) |0.0069 |0.2367 |0.4703 |0.1703 |0.9004 | See the following, for the full details on this model: [https://blog.rwkv.com/p/eaglex-v2-soaring-past-llama2-7b](https://blog.rwkv.com/p/eaglex-v2-soaring-past-llama2-7b) ## Links - [Our wiki](https://wiki.rwkv.com) - [Full eval data](https://docs.google.com/spreadsheets/d/1CBLU6yKkW-8FMvGD4INO3qjeHZ0qkKnZFcM6n6lWNOs/edit#gid=912381775) - [Recursal.AI Cloud Platform](https://recursal.ai) - [HF Gradio Demo](https://huggingface.co/spaces/RWKV/v5-EagleX-v2-7B-gradio) - [Blog article, detailing our model launch](https://blog.rwkv.com/p/eaglex-v2-soaring-past-llama2-7b) ## Acknowledgement We are grateful for the help and support from the following key groups: - [Recursal.ai](https://recursal.ai) team for financing the GPU resources, and managing the training of this foundation model - you can run the Eagle line of RWKV models on their cloud / on-premise platform today. - EleutherAI for their support, especially in the v5/v6 Eagle/Finch paper - Linux Foundation AI & Data group for supporting and hosting the RWKV project
{"language": ["en"], "license": "apache-2.0", "datasets": ["cerebras/SlimPajama-627B", "EleutherAI/pile"]}
RWKV/v5-EagleX-v2-7B-pth
null
[ "en", "dataset:cerebras/SlimPajama-627B", "dataset:EleutherAI/pile", "license:apache-2.0", "has_space", "region:us" ]
null
2024-04-17T04:48:41+00:00
[]
[ "en" ]
TAGS #en #dataset-cerebras/SlimPajama-627B #dataset-EleutherAI/pile #license-apache-2.0 #has_space #region-us
!An eagle flying high up in the sky ### RWKV EagleX 7B v2 Model > > !Important!: This is not meant to be used with huggingface transformers library > > Use the Hugging Face varient instead, found here (v5-EagleX-v2-7B-HF) > > > The following is the raw representation of the EagleX 7B v2 model. For use with our own set of trainers > > > This is not an instruct tune model! (soon...) > > > Quickstart with the hugging face transformer library ---------------------------------------------------- See the huggingface version here (v5-EagleX-v2-7B-HF) Evaluation ---------- The following shows the progression of the model from 1.1T trained to 2.25T trained. Compared against other top performing models in the same weight class. See the following, for the full details on this model: URL Links ----- * Our wiki * Full eval data * Recursal.AI Cloud Platform * HF Gradio Demo * Blog article, detailing our model launch Acknowledgement --------------- We are grateful for the help and support from the following key groups: * URL team for financing the GPU resources, and managing the training of this foundation model - you can run the Eagle line of RWKV models on their cloud / on-premise platform today. * EleutherAI for their support, especially in the v5/v6 Eagle/Finch paper * Linux Foundation AI & Data group for supporting and hosting the RWKV project
[ "### RWKV EagleX 7B v2 Model\n\n\n\n> \n> !Important!: This is not meant to be used with huggingface transformers library \n> \n> Use the Hugging Face varient instead, found here (v5-EagleX-v2-7B-HF)\n> \n> \n> The following is the raw representation of the EagleX 7B v2 model. For use with our own set of trainers\n> \n> \n> This is not an instruct tune model! (soon...)\n> \n> \n> \n\n\nQuickstart with the hugging face transformer library\n----------------------------------------------------\n\n\nSee the huggingface version here (v5-EagleX-v2-7B-HF)\n\n\nEvaluation\n----------\n\n\nThe following shows the progression of the model from 1.1T trained to 2.25T trained.\n\n\n\nCompared against other top performing models in the same weight class.\n\n\n\nSee the following, for the full details on this model: URL\n\n\nLinks\n-----\n\n\n* Our wiki\n* Full eval data\n* Recursal.AI Cloud Platform\n* HF Gradio Demo\n* Blog article, detailing our model launch\n\n\nAcknowledgement\n---------------\n\n\nWe are grateful for the help and support from the following key groups:\n\n\n* URL team for financing the GPU resources, and managing the training of this foundation model - you can run the Eagle line of RWKV models on their cloud / on-premise platform today.\n* EleutherAI for their support, especially in the v5/v6 Eagle/Finch paper\n* Linux Foundation AI & Data group for supporting and hosting the RWKV project" ]
[ "TAGS\n#en #dataset-cerebras/SlimPajama-627B #dataset-EleutherAI/pile #license-apache-2.0 #has_space #region-us \n", "### RWKV EagleX 7B v2 Model\n\n\n\n> \n> !Important!: This is not meant to be used with huggingface transformers library \n> \n> Use the Hugging Face varient instead, found here (v5-EagleX-v2-7B-HF)\n> \n> \n> The following is the raw representation of the EagleX 7B v2 model. For use with our own set of trainers\n> \n> \n> This is not an instruct tune model! (soon...)\n> \n> \n> \n\n\nQuickstart with the hugging face transformer library\n----------------------------------------------------\n\n\nSee the huggingface version here (v5-EagleX-v2-7B-HF)\n\n\nEvaluation\n----------\n\n\nThe following shows the progression of the model from 1.1T trained to 2.25T trained.\n\n\n\nCompared against other top performing models in the same weight class.\n\n\n\nSee the following, for the full details on this model: URL\n\n\nLinks\n-----\n\n\n* Our wiki\n* Full eval data\n* Recursal.AI Cloud Platform\n* HF Gradio Demo\n* Blog article, detailing our model launch\n\n\nAcknowledgement\n---------------\n\n\nWe are grateful for the help and support from the following key groups:\n\n\n* URL team for financing the GPU resources, and managing the training of this foundation model - you can run the Eagle line of RWKV models on their cloud / on-premise platform today.\n* EleutherAI for their support, especially in the v5/v6 Eagle/Finch paper\n* Linux Foundation AI & Data group for supporting and hosting the RWKV project" ]
text-generation
transformers
# WizardLaker 7B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63cf23cffbd0cc580bc65c73/PM2o9ow7b8rHQZ2sM3xv1.png) This is a merge of the new WizardLM 2 7B model with my custom DolphinLake model (https://huggingface.co/Noodlz/DolphinLake-7B). It seems to perform well; I will be submitting it for evaluation on the Open LLM Leaderboard. Created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using amazingvince/Not-WizardLM-2-7B as a base. ### Models Merged The following models were included in the merge: * /Noodlz/DolphinLake-7B ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: dare_ties parameters: int8_mask: true t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors embed_slerp: true models: - model: amazingvince/Not-WizardLM-2-7B # No parameters necessary for base model - model: /Noodlz/DolphinLake-7B parameters: density: 0.58 weight: 0.4 base_model: amazingvince/Not-WizardLM-2-7B tokenizer_source: model:amazingvince/Not-WizardLM-2-7B dtype: bfloat16 ```
{"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": []}
Noodlz/WizardLaker-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T04:48:49+00:00
[ "2311.03099", "2306.01708" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2311.03099 #arxiv-2306.01708 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# WizardLaker 7B !image/png This is a merge of the new WizardLM 2 7B model with my custom DolphinLake Model(URL Seems to perform well. will be submitting for evals on openLLM leaderboards. Created using mergekit. ## Merge Details ### Merge Method This model was merged using the DARE TIES merge method using amazingvince/Not-WizardLM-2-7B as a base. ### Models Merged The following models were included in the merge: * /Noodlz/DolphinLake-7B ### Configuration The following YAML configuration was used to produce this model:
[ "# WizardLaker 7B\n\n!image/png\n\nThis is a merge of the new WizardLM 2 7B model with my custom DolphinLake Model(URL Seems to perform well. will be submitting for evals on openLLM leaderboards.\nCreated using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the DARE TIES merge method using amazingvince/Not-WizardLM-2-7B as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* /Noodlz/DolphinLake-7B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2311.03099 #arxiv-2306.01708 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# WizardLaker 7B\n\n!image/png\n\nThis is a merge of the new WizardLM 2 7B model with my custom DolphinLake Model(URL Seems to perform well. will be submitting for evals on openLLM leaderboards.\nCreated using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the DARE TIES merge method using amazingvince/Not-WizardLM-2-7B as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* /Noodlz/DolphinLake-7B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
kai-oh/mistral-7b-ift-best-v4-hf
null
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T04:48:52+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
![image/png](https://huggingface.co/nbeerbower/flammen13X-mistral-7B/resolve/main/flammen13x.png) # flammen18X-mistral-7B A Mistral 7B LLM built from merging pretrained models and finetuning on [ResplendentAI/NSFW_RP_Format_DPO](https://huggingface.co/datasets/ResplendentAI/NSFW_RP_Format_DPO). Flammen specializes in exceptional character roleplay, creative writing, and general intelligence ### Method Finetuned using an A100 on Google Colab. [Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne) ### Configuration LoRA, model, and training settings: ```python # LoRA configuration peft_config = LoraConfig( r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'] ) # Model to fine-tune model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) model.config.use_cache = False # Reference model ref_model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) # Training arguments training_args = TrainingArguments( per_device_train_batch_size=2, gradient_accumulation_steps=8, gradient_checkpointing=True, learning_rate=5e-5, lr_scheduler_type="cosine", max_steps=420, save_strategy="no", logging_steps=1, output_dir=new_model, optim="paged_adamw_32bit", warmup_steps=100, bf16=True, report_to="wandb", ) # Create DPO trainer dpo_trainer = DPOTrainer( model, ref_model, args=training_args, train_dataset=dataset, tokenizer=tokenizer, peft_config=peft_config, beta=0.1, max_prompt_length=1024, max_length=1536, force_use_ref_model=True ) # Fine-tune model with DPO dpo_trainer.train() ```
{"license": "apache-2.0", "library_name": "transformers", "tags": ["nsfw", "not-for-all-audiences"], "datasets": ["ResplendentAI/NSFW_RP_Format_DPO"], "base_model": ["flammenai/flammen18-mistral-7B"]}
flammenai/flammen18X-mistral-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "nsfw", "not-for-all-audiences", "dataset:ResplendentAI/NSFW_RP_Format_DPO", "base_model:flammenai/flammen18-mistral-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T04:49:07+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #nsfw #not-for-all-audiences #dataset-ResplendentAI/NSFW_RP_Format_DPO #base_model-flammenai/flammen18-mistral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
!image/png # flammen18X-mistral-7B A Mistral 7B LLM built from merging pretrained models and finetuning on ResplendentAI/NSFW_RP_Format_DPO. Flammen specializes in exceptional character roleplay, creative writing, and general intelligence ### Method Finetuned using an A100 on Google Colab. Fine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne ### Configuration LoRA, model, and training settings:
[ "# flammen18X-mistral-7B\n\nA Mistral 7B LLM built from merging pretrained models and finetuning on ResplendentAI/NSFW_RP_Format_DPO. \nFlammen specializes in exceptional character roleplay, creative writing, and general intelligence", "### Method\n\nFinetuned using an A100 on Google Colab.\n\nFine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne", "### Configuration\n\nLoRA, model, and training settings:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #nsfw #not-for-all-audiences #dataset-ResplendentAI/NSFW_RP_Format_DPO #base_model-flammenai/flammen18-mistral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# flammen18X-mistral-7B\n\nA Mistral 7B LLM built from merging pretrained models and finetuning on ResplendentAI/NSFW_RP_Format_DPO. \nFlammen specializes in exceptional character roleplay, creative writing, and general intelligence", "### Method\n\nFinetuned using an A100 on Google Colab.\n\nFine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne", "### Configuration\n\nLoRA, model, and training settings:" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_65536_512_47M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.4685 - F1 Score: 0.7955 - Accuracy: 0.7967 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5946 | 7.41 | 200 | 0.5181 | 0.7242 | 0.7305 | | 0.5101 | 14.81 | 400 | 0.4879 | 0.7543 | 0.7563 | | 0.4823 | 22.22 | 600 | 0.4674 | 0.7718 | 0.7739 | | 0.4587 | 29.63 | 800 | 0.4527 | 0.7799 | 0.7803 | | 0.4441 | 37.04 | 1000 | 0.4451 | 0.7834 | 0.7844 | | 0.4322 | 44.44 | 1200 | 0.4384 | 0.7897 | 0.7910 | | 0.4236 | 51.85 | 1400 | 0.4384 | 0.7863 | 0.7887 | | 0.4145 | 59.26 | 1600 | 0.4270 | 0.7957 | 0.7967 | | 0.4093 | 66.67 | 1800 | 0.4280 | 0.7965 | 0.7973 | | 0.4045 | 74.07 | 2000 | 0.4224 | 0.7995 | 0.8004 | | 0.4009 | 81.48 | 2200 | 0.4181 | 0.8005 | 0.8013 | | 0.3944 | 88.89 | 2400 | 0.4210 | 0.8015 | 0.8022 | | 0.3899 | 96.3 | 2600 | 0.4220 | 0.8012 | 0.8015 | | 0.3871 | 103.7 | 2800 | 0.4174 | 0.8024 | 0.8037 | | 0.3805 | 111.11 | 3000 | 0.4174 | 0.8031 | 0.8036 | | 0.378 | 118.52 | 3200 | 0.4160 | 0.8051 | 0.8065 | | 0.3719 | 125.93 | 3400 | 0.4182 | 0.8055 | 0.8059 | | 0.3695 | 133.33 | 3600 | 0.4261 | 0.8060 | 0.8068 | | 0.3638 | 140.74 | 3800 | 0.4232 | 0.8031 | 0.8040 | | 0.362 | 148.15 | 4000 | 0.4271 | 0.8062 | 0.8074 | | 0.3568 | 155.56 | 4200 | 0.4268 | 0.8038 | 0.8050 | | 0.3529 | 162.96 | 4400 | 0.4247 | 0.8063 | 0.8071 | | 0.3499 | 170.37 | 4600 | 0.4262 | 0.8044 | 0.8058 | | 0.3461 | 177.78 | 4800 | 0.4247 | 0.8064 | 0.8077 | | 0.3431 | 185.19 | 5000 | 0.4315 | 0.8053 | 0.8064 | | 0.3406 | 192.59 | 5200 | 0.4328 | 0.8048 | 0.8064 | | 0.337 | 200.0 | 5400 | 0.4297 | 0.8052 | 0.8062 | | 0.3335 | 207.41 | 5600 | 0.4345 | 0.8050 | 0.8061 | | 0.3313 | 214.81 | 5800 | 0.4340 | 0.8036 | 0.8050 | | 0.3277 | 222.22 | 6000 | 0.4359 | 0.8052 | 0.8062 | | 0.3277 | 229.63 | 6200 | 0.4252 | 0.8040 | 0.8050 | | 0.3244 | 237.04 | 6400 | 0.4326 | 0.8062 | 0.8070 | | 0.3226 | 244.44 | 6600 | 0.4417 | 0.8054 | 0.8064 | | 0.3193 | 251.85 | 6800 | 0.4428 | 0.8053 | 0.8062 | | 0.3182 | 259.26 | 7000 | 0.4430 | 0.8062 | 0.8073 | | 0.3162 | 266.67 | 7200 | 0.4372 | 0.8072 | 0.8082 | | 0.3143 | 274.07 | 7400 | 0.4376 | 0.8049 | 0.8062 | | 0.312 | 281.48 | 7600 | 0.4419 | 0.8050 | 0.8061 | | 0.3118 | 288.89 | 7800 | 0.4416 | 0.8048 | 0.8058 | | 0.3104 | 296.3 | 8000 | 0.4388 | 0.8055 | 0.8065 | | 0.3078 | 303.7 | 8200 | 0.4407 | 0.8056 | 0.8065 | | 0.307 | 311.11 | 8400 | 0.4355 | 0.8062 | 0.8070 | | 0.3049 | 318.52 | 8600 | 0.4499 | 0.8067 | 0.8079 | | 0.3044 | 325.93 | 8800 | 0.4435 
| 0.8064 | 0.8076 | | 0.3042 | 333.33 | 9000 | 0.4443 | 0.8077 | 0.8086 | | 0.3027 | 340.74 | 9200 | 0.4471 | 0.8078 | 0.8089 | | 0.3022 | 348.15 | 9400 | 0.4483 | 0.8054 | 0.8067 | | 0.3024 | 355.56 | 9600 | 0.4446 | 0.8067 | 0.8077 | | 0.3018 | 362.96 | 9800 | 0.4455 | 0.8065 | 0.8076 | | 0.3005 | 370.37 | 10000 | 0.4465 | 0.8069 | 0.8080 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_1-seqsight_65536_512_47M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_65536_512_47M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-04-17T04:51:37+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
GUE\_mouse\_1-seqsight\_65536\_512\_47M-L32\_all ================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.4685 * F1 Score: 0.7955 * Accuracy: 0.7967 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_hh_shp1_400 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.0567 - Rewards/chosen: -14.7249 - Rewards/rejected: -15.7125 - Rewards/accuracies: 0.5500 - Rewards/margins: 0.9876 - Logps/rejected: -266.8951 - Logps/chosen: -241.5179 - Logits/rejected: -0.8700 - Logits/chosen: -0.9109 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.0013 | 4.0 | 100 | 2.0768 | -1.5902 | -1.8351 | 0.4800 | 0.2449 | -251.4758 | -226.9239 | -0.8311 | -0.8228 | | 0.1427 | 8.0 | 200 | 3.3444 | -6.8511 | -7.5916 | 0.5800 | 0.7405 | -257.8718 | -232.7693 | -0.7115 | -0.7215 | | 0.0029 | 12.0 | 300 | 4.2892 | -2.5538 | -3.3508 | 0.5200 | 0.7970 | -253.1599 | -227.9944 | -0.9768 | -0.9928 | | 0.0 | 16.0 | 400 | 5.0370 | -14.7509 | -15.7815 | 0.5300 | 1.0306 | -266.9717 | -241.5469 | -0.8694 | -0.9104 | | 0.0 | 20.0 | 500 | 5.0695 | -14.7352 | -15.7245 | 0.5400 | 0.9894 | -266.9084 | -241.5294 | -0.8698 | -0.9112 | | 0.0 | 24.0 | 600 | 5.0615 | -14.7459 | -15.7542 | 0.5500 | 1.0083 | -266.9414 | -241.5412 | -0.8694 | -0.9109 | | 0.0 | 28.0 | 700 | 5.0597 | -14.7540 | -15.7286 | 0.5300 | 0.9747 | -266.9130 | -241.5502 | -0.8700 | -0.9110 | | 0.0 | 32.0 | 800 | 5.0420 | -14.7242 | -15.7535 | 0.5400 | 1.0293 | -266.9406 | -241.5171 | -0.8695 | -0.9111 | | 0.0 | 36.0 | 900 | 5.0441 | -14.7134 | -15.7250 | 0.5400 | 1.0116 | -266.9089 | -241.5051 | -0.8697 | -0.9110 | | 0.0 | 40.0 | 1000 | 5.0567 | -14.7249 | -15.7125 | 0.5500 | 0.9876 | -266.8951 | -241.5179 | -0.8700 | -0.9109 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_shp1_400", "results": []}]}
guoyu-zhang/model_hh_shp1_400
null
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-04-17T04:51:52+00:00
[]
[]
TAGS #peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
model\_hh\_shp1\_400 ==================== This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 5.0567 * Rewards/chosen: -14.7249 * Rewards/rejected: -15.7125 * Rewards/accuracies: 0.5500 * Rewards/margins: 0.9876 * Logps/rejected: -266.8951 * Logps/chosen: -241.5179 * Logits/rejected: -0.8700 * Logits/chosen: -0.9109 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 4 * eval\_batch\_size: 1 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 100 * training\_steps: 1000 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_4-seqsight_65536_512_47M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.9630 - F1 Score: 0.5363 - Accuracy: 0.5364 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6709 | 25.0 | 200 | 0.7104 | 0.5514 | 0.5512 | | 0.6081 | 50.0 | 400 | 0.7520 | 0.5332 | 0.5358 | | 0.5559 | 75.0 | 600 | 0.7935 | 0.5472 | 0.5486 | | 0.5218 | 100.0 | 800 | 0.8150 | 0.5376 | 0.5374 | | 0.5034 | 125.0 | 1000 | 0.8141 | 0.5418 | 0.5417 | | 0.4907 | 150.0 | 1200 | 0.8255 | 0.5510 | 0.5512 | | 0.4829 | 175.0 | 1400 | 0.8471 | 0.5455 | 0.5475 | | 0.4753 | 200.0 | 1600 | 0.8409 | 0.5481 | 0.5481 | | 0.4669 | 225.0 | 1800 | 0.8286 | 0.5502 | 0.5502 | | 0.4605 | 250.0 | 2000 | 0.8440 | 0.5438 | 0.5438 | | 0.4521 | 275.0 | 2200 | 0.8756 | 0.5428 | 0.5433 | | 0.4428 | 300.0 | 2400 | 0.8696 | 0.5456 | 0.5454 | | 0.4333 | 325.0 | 2600 | 0.8581 | 0.5438 | 0.5465 | | 0.4227 | 350.0 | 2800 | 0.8902 | 0.5438 | 0.5449 | | 0.4109 | 375.0 | 3000 | 0.8924 | 0.5433 | 0.5433 | | 0.3994 | 400.0 | 3200 | 0.9358 | 0.5466 | 0.5481 | | 0.388 | 425.0 | 3400 | 0.9573 | 0.5509 | 0.5512 | | 0.3742 | 450.0 | 3600 | 0.9472 | 0.5588 | 0.5587 | | 0.3611 | 475.0 | 3800 | 0.9667 | 0.5487 | 0.5528 | | 0.3506 | 500.0 | 4000 | 1.0085 | 0.5577 | 0.5576 | | 0.34 | 525.0 | 4200 | 1.0322 | 0.5577 | 0.5576 | | 0.3274 | 550.0 | 4400 | 0.9904 | 0.5552 | 0.5555 | | 0.3158 | 575.0 | 4600 | 1.0181 | 0.5507 | 0.5507 | | 0.3046 | 600.0 | 4800 | 1.0350 | 0.5549 | 0.5550 | | 0.2928 | 625.0 | 5000 | 1.0384 | 0.5510 | 0.5512 | | 0.2847 | 650.0 | 5200 | 1.0845 | 0.5535 | 0.5534 | | 0.2754 | 675.0 | 5400 | 1.1063 | 0.5535 | 0.5534 | | 0.2688 | 700.0 | 5600 | 1.1460 | 0.5549 | 0.5550 | | 0.2587 | 725.0 | 5800 | 1.1243 | 0.5557 | 0.5555 | | 0.2506 | 750.0 | 6000 | 1.1989 | 0.5525 | 0.5523 | | 0.2434 | 775.0 | 6200 | 1.1586 | 0.5482 | 0.5481 | | 0.2382 | 800.0 | 6400 | 1.1869 | 0.5509 | 0.5507 | | 0.2307 | 825.0 | 6600 | 1.2121 | 0.5426 | 0.5428 | | 0.2275 | 850.0 | 6800 | 1.1873 | 0.5353 | 0.5353 | | 0.2211 | 875.0 | 7000 | 1.1901 | 0.5450 | 0.5449 | | 0.216 | 900.0 | 7200 | 1.2012 | 0.5503 | 0.5502 | | 0.2109 | 925.0 | 7400 | 1.2088 | 0.5476 | 0.5475 | | 0.2068 | 950.0 | 7600 | 1.2467 | 0.5402 | 0.5406 | | 0.2043 | 975.0 | 7800 | 1.2640 | 0.5402 | 0.5401 | | 0.2001 | 1000.0 | 8000 | 1.2730 | 0.5432 | 0.5433 | | 0.1968 | 1025.0 | 8200 | 1.2403 | 0.5461 | 0.5459 | | 0.1934 | 1050.0 | 8400 | 1.2645 | 0.5323 | 0.5321 | | 0.1924 | 1075.0 | 8600 | 1.2582 | 0.5434 | 0.5433 | | 0.1908 | 1100.0 | 8800 | 1.2580 | 0.5403 | 0.5401 | | 
0.1876 | 1125.0 | 9000 | 1.2909 | 0.5423 | 0.5422 | | 0.1872 | 1150.0 | 9200 | 1.2931 | 0.5445 | 0.5443 | | 0.1847 | 1175.0 | 9400 | 1.2972 | 0.5450 | 0.5449 | | 0.1825 | 1200.0 | 9600 | 1.2851 | 0.5428 | 0.5428 | | 0.1819 | 1225.0 | 9800 | 1.3081 | 0.5439 | 0.5438 | | 0.1819 | 1250.0 | 10000 | 1.2927 | 0.5418 | 0.5417 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_4-seqsight_65536_512_47M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_65536_512_47M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-04-17T04:52:08+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
GUE\_mouse\_4-seqsight\_65536\_512\_47M-L32\_all ================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset. It achieves the following results on the evaluation set: * Loss: 0.9630 * F1 Score: 0.5363 * Accuracy: 0.5364 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-car0004-addrealimg This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0102 - Accuracy: 0.9969 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.13 | 0.99 | 50 | 0.0489 | 0.9883 | | 0.0305 | 1.99 | 101 | 0.0102 | 0.9951 | | 0.0188 | 3.0 | 152 | 0.0125 | 0.9957 | | 0.0148 | 4.0 | 203 | 0.0102 | 0.9969 | | 0.0185 | 4.93 | 250 | 0.0102 | 0.9969 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-tiny-patch4-window7-224", "model-index": [{"name": "swin-tiny-patch4-window7-224-finetuned-car0004-addrealimg", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9969211822660099, "name": "Accuracy"}]}]}]}
tsware/swin-tiny-patch4-window7-224-finetuned-car0004-addrealimg
null
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T04:54:55+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
swin-tiny-patch4-window7-224-finetuned-car0004-addrealimg ========================================================= This model is a fine-tuned version of microsoft/swin-tiny-patch4-window7-224 on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 0.0102 * Accuracy: 0.9969 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_3-seqsight_65536_512_47M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset. It achieves the following results on the evaluation set: - Loss: 2.2061 - F1 Score: 0.6943 - Accuracy: 0.6946 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:| | 0.3842 | 200.0 | 200 | 1.2987 | 0.6440 | 0.6444 | | 0.0869 | 400.0 | 400 | 1.5612 | 0.6778 | 0.6778 | | 0.043 | 600.0 | 600 | 1.7566 | 0.6736 | 0.6736 | | 0.0282 | 800.0 | 800 | 1.8686 | 0.6904 | 0.6904 | | 0.0205 | 1000.0 | 1000 | 2.0206 | 0.6860 | 0.6862 | | 0.0168 | 1200.0 | 1200 | 1.9966 | 0.6945 | 0.6946 | | 0.0141 | 1400.0 | 1400 | 2.0739 | 0.7105 | 0.7113 | | 0.0119 | 1600.0 | 1600 | 2.1799 | 0.6987 | 0.6987 | | 0.0102 | 1800.0 | 1800 | 2.2821 | 0.6862 | 0.6862 | | 0.0094 | 2000.0 | 2000 | 2.2383 | 0.6778 | 0.6778 | | 0.0085 | 2200.0 | 2200 | 2.2737 | 0.6818 | 0.6820 | | 0.0078 | 2400.0 | 2400 | 2.3127 | 0.6987 | 0.6987 | | 0.007 | 2600.0 | 2600 | 2.3584 | 0.6903 | 0.6904 | | 0.0065 | 2800.0 | 2800 | 2.2929 | 0.6945 | 0.6946 | | 0.0066 | 3000.0 | 3000 | 2.3720 | 0.7105 | 0.7113 | | 0.0062 | 3200.0 | 3200 | 2.3486 | 0.7111 | 0.7113 | | 0.0059 | 3400.0 | 3400 | 2.3332 | 0.6902 | 0.6904 | | 0.0061 | 3600.0 | 3600 | 2.5363 | 0.6860 | 0.6862 | | 0.0051 | 3800.0 | 3800 | 2.4039 | 0.6820 | 0.6820 | | 0.0049 | 4000.0 | 4000 | 2.4326 | 0.6820 | 0.6820 | | 0.0048 | 4200.0 | 4200 | 2.6798 | 0.6980 | 0.6987 | | 0.0048 | 4400.0 | 4400 | 2.6345 | 0.6694 | 0.6695 | | 0.0046 | 4600.0 | 4600 | 2.4664 | 0.6980 | 0.6987 | | 0.0042 | 4800.0 | 4800 | 2.4284 | 0.6986 | 0.6987 | | 0.0043 | 5000.0 | 5000 | 2.4493 | 0.6858 | 0.6862 | | 0.0038 | 5200.0 | 5200 | 2.6022 | 0.6943 | 0.6946 | | 0.0041 | 5400.0 | 5400 | 2.5375 | 0.6940 | 0.6946 | | 0.0039 | 5600.0 | 5600 | 2.4498 | 0.6736 | 0.6736 | | 0.0036 | 5800.0 | 5800 | 2.5372 | 0.6860 | 0.6862 | | 0.0032 | 6000.0 | 6000 | 2.9134 | 0.6735 | 0.6736 | | 0.0034 | 6200.0 | 6200 | 2.6953 | 0.6695 | 0.6695 | | 0.0031 | 6400.0 | 6400 | 2.7115 | 0.6736 | 0.6736 | | 0.0032 | 6600.0 | 6600 | 2.7506 | 0.7070 | 0.7071 | | 0.0031 | 6800.0 | 6800 | 2.8463 | 0.6903 | 0.6904 | | 0.0032 | 7000.0 | 7000 | 2.6918 | 0.6904 | 0.6904 | | 0.0028 | 7200.0 | 7200 | 2.7421 | 0.7028 | 0.7029 | | 0.0029 | 7400.0 | 7400 | 2.5392 | 0.6819 | 0.6820 | | 0.0028 | 7600.0 | 7600 | 2.7772 | 0.6819 | 0.6820 | | 0.0026 | 7800.0 | 7800 | 2.9030 | 0.6901 | 0.6904 | | 0.0025 | 8000.0 | 8000 | 2.8849 | 0.6903 | 0.6904 | | 0.0026 | 8200.0 | 8200 | 2.9484 | 0.6903 | 0.6904 | | 0.0025 | 8400.0 | 8400 | 2.8952 | 0.6862 | 0.6862 | | 0.0023 | 8600.0 | 8600 | 2.8349 | 0.6778 | 0.6778 | | 0.0025 | 
8800.0 | 8800 | 2.9036 | 0.6862 | 0.6862 | | 0.0024 | 9000.0 | 9000 | 2.9186 | 0.6986 | 0.6987 | | 0.0022 | 9200.0 | 9200 | 2.9094 | 0.6778 | 0.6778 | | 0.0023 | 9400.0 | 9400 | 2.9813 | 0.6861 | 0.6862 | | 0.0022 | 9600.0 | 9600 | 2.9257 | 0.6904 | 0.6904 | | 0.0022 | 9800.0 | 9800 | 2.9049 | 0.6945 | 0.6946 | | 0.0021 | 10000.0 | 10000 | 2.9241 | 0.6945 | 0.6946 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_3-seqsight_65536_512_47M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_3-seqsight_65536_512_47M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-04-17T04:55:06+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
GUE\_mouse\_3-seqsight\_65536\_512\_47M-L32\_all ================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset. It achieves the following results on the evaluation set: * Loss: 2.2061 * F1 Score: 0.6943 * Accuracy: 0.6946 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
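As a convenience, the following sketch illustrates how this PEFT adapter could be loaded on top of the base checkpoint. It is an assumption-laden example, not the authors' evaluation code: it presumes a two-label sequence-classification head, public Hub access to both repositories, and that the base architecture loads without `trust_remote_code` (adjust if the base model requires it).

```python
# Hedged sketch: load the PEFT adapter on top of the base seqsight checkpoint.
# Assumptions (not confirmed by this card): two-label classification head,
# both repos public, no trust_remote_code needed for the base architecture.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_3-seqsight_65536_512_47M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Score a toy DNA sequence; replace with a real GUE_mouse_3 example.
inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```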
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
<!-- header start --> <p align="center"> <img src="https://i.imgur.com/mNM6Cai.png" width="100%" alt="Friendli Logo"> </p> <!-- header end --> # C4AI Command R+ FP8 - Model creator: [CohereForAI](https://cohere.com/) - Original model: [c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus) ## Description This repo contains the c4ai-command-r-plus model quantized to FP8 by FriendliAI, significantly enhancing its inference efficiency while maintaining high accuracy. Note that FP8 is only supported by NVIDIA Ada, Hopper, and Blackwell GPU architectures. Check out [FriendliAI documentation](https://docs.friendli.ai/) for more details. ## Compatibility This model is compatible with **[Friendli Container](https://friendli.ai/products/container/)**. ## Prerequisites - Before you begin, make sure you have signed up for [Friendli Suite](https://suite.friendli.ai/). **You can use Friendli Containers free of charge for four weeks.** - Prepare a Personal Access Token following [this guide](#preparing-personal-access-token). - Prepare a Friendli Container Secret following [this guide](#preparing-container-secret). - Install Hugging Face CLI with `pip install -U "huggingface_hub[cli]"` ### Preparing Personal Access Token PAT (Personal Access Token) is the user credential for logging into our container registry. 1. Sign in to [Friendli Suite](https://suite.friendli.ai/). 2. Go to **[User Settings > Tokens](https://suite.friendli.ai/user-settings/tokens)** and click **'Create new token'**. 3. Save your created token value. ### Pulling Friendli Container Image 1. Log in to the Docker client using the personal access token created as outlined in [this guide](#preparing-personal-access-token). ```sh export FRIENDLI_PAT="YOUR PAT" docker login registry.friendli.ai -u $YOUR_EMAIL -p $FRIENDLI_PAT ``` 2. Pull image ```sh docker pull registry.friendli.ai/trial ``` ## Running Friendli Container Once you've prepared the image of Friendli Container, you can launch it to create a serving endpoint. ```sh docker run \ --gpus '"device=0,1,2,3"' \ -p 8000:8000 \ -v ~/.cache/huggingface:/root/.cache/huggingface \ -e FRIENDLI_CONTAINER_SECRET="YOUR CONTAINER SECRET" \ registry.friendli.ai/trial \ --web-server-port 8000 \ --hf-model-name FriendliAI/c4ai-command-r-plus-fp8 \ --num-devices 4 # Use tensor parallelism degree 4 ``` --- # Original model card: CohereForAI's C4AI Command R+ # C4AI Command R+ 🚨 **This model is the non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit)**. ## Model Summary C4AI Command R+ is an open weights research release of a 104 billion parameter model with highly advanced capabilities, including Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering. C4AI Command R+ is part of a family of open weight releases from Cohere For AI and Cohere. 
Our smaller companion model is [C4AI Command R](https://huggingface.co/CohereForAI/c4ai-command-r-v01) Developed by: [Cohere](https://cohere.com/) and [Cohere For AI](https://cohere.for.ai) - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: c4ai-command-r-plus - Model Size: 104 billion parameters - Context length: 128K **Try C4AI Command R+** You can try out C4AI Command R+ before downloading the weights in our hosted [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus). **Usage** Please install `transformers` from the source repository that includes the necessary changes for this model. ```python # pip install 'git+https://github.com/huggingface/transformers.git' from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r-plus chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` **Quantized model through bitsandbytes, 8-bit precision** ```python # pip install 'git+https://github.com/huggingface/transformers.git' bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig bnb_config = BitsAndBytesConfig(load_in_8bit=True) model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config) # Format message with the command-r-plus chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` **Quantized model through bitsandbytes, 4-bit precision** This model is non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit). ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. **Languages covered**: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic. 
Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian. **Context length**: Command R+ supports a context length of 128K. ## Evaluations Command R+ has been submitted to the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). We include the results below, along with a direct comparison to the strongest state-of-art open weights models currently available on Hugging Face. We note that these results are only useful to compare when evaluations are implemented for all models in a [standardized way](https://github.com/EleutherAI/lm-evaluation-harness) using publically available code, and hence shouldn't be used for comparison outside of models submitted to the leaderboard or compared to self-reported numbers which can't be replicated in the same way. | Model | Average | Arc (Challenge) | Hella Swag | MMLU | Truthful QA | Winogrande | GSM8k | |:--------------------------------|----------:|------------------:|-------------:|-------:|--------------:|-------------:|--------:| | **CohereForAI/c4ai-command-r-plus** | 74.6 | 70.99 | 88.6 | 75.7 | 56.3 | 85.4 | 70.7 | | [DBRX Instruct](https://huggingface.co/databricks/dbrx-instruct) | 74.5 | 68.9 | 89 | 73.7 | 66.9 | 81.8 | 66.9 | | [Mixtral 8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.7 | 70.1 | 87.6 | 71.4 | 65 | 81.1 | 61.1 | | [Mixtral 8x7B Chat](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.6 | 70.2 | 87.6 | 71.2 | 64.6 | 81.4 | 60.7 | | [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) | 68.5 | 65.5 | 87 | 68.2 | 52.3 | 81.5 | 56.6 | | [Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf) | 67.9 | 67.3 | 87.3 | 69.8 | 44.9 | 83.7 | 54.1 | | [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 65.3 | 65.4 | 84.2 | 74.9 | 55.4 | 80.1 | 31.9 | | [Gemma-7B](https://huggingface.co/google/gemma-7b) | 63.8 | 61.1 | 82.2 | 64.6 | 44.8 | 79 | 50.9 | | [LLama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | 62.4 | 64.6 | 85.9 | 63.9 | 52.8 | 80.5 | 26.7 | | [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 61 | 60 | 83.3 | 64.2 | 42.2 | 78.4 | 37.8 | We include these metrics here because they are frequently requested, but note that these metrics do not capture RAG, multilingual, tooling performance or the evaluation of open ended generations which we believe Command R+ to be state-of-art at. For evaluations of RAG, multilingual and tooling read more [here](https://txt.cohere.com/command-r-plus-microsoft-azure/). For evaluation of open ended generation, Command R+ is currently being evaluated on the [chatbot arena](https://chat.lmsys.org/). ### Tool use & multihop capabilities: Command R+ has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation. Command R+’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ may use one of its supplied tools more than once. 
The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the `directly_answer` tool, but it can be removed or renamed if required. Comprehensive documentation for working with command R+'s tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). The code snippet below shows a minimal working example on how to render a prompt. <details> <summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use: tools = [ { "name": "internet_search", "description": "Returns a list of relevant document snippets for a textual query retrieved from the internet", "parameter_definitions": { "query": { "description": "Query to search the internet with", "type": 'str', "required": True } } }, { 'name': "directly_answer", "description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history", 'parameter_definitions': {} } ] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_tool_use_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling. 
## Available Tools Here is a list of tools that you have available to you: ```python def internet_search(query: str) -> List[Dict]: """Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query (str): Query to search the internet with """ pass ``` ```python def directly_answer() -> List[Dict]: """Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass ```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example: ```json [ { "tool_name": title of the tool in the specification, "parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters } ]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary> ```` Action: ```json [ { "tool_name": "internet_search", "parameters": { "query": "biggest penguin in the world" } } ] ``` ```` </details> ### Grounded Generation and RAG Capabilities: Command R+ has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation. Command R+’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured. By default, Command R+ will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation. The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens. 
Comprehensive documentation for working with Command R+'s grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). The code snippet below shows a minimal working example on how to render a prompt. <details> <summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary> ````python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # define documents to ground on: documents = [ { "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." }, { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."} ] # render the tool use prompt as a string: grounded_generation_prompt = tokenizer.apply_grounded_generation_template( conversation, documents=documents, citation_mode="accurate", # or "fast" tokenize=False, add_generation_prompt=True, ) print(grounded_generation_prompt) ```` </details> <details> <summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary> ````<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results> Document: 0 title: Tall penguins text: Emperor penguins are the tallest growing up to 122 cm in height. Document: 1 title: Penguin habitats text: Emperor penguins only live in Antarctica. </results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line. Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'. Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'. 
Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup. Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary> ```` Relevant Documents: 0,1 Cited Documents: 0,1 Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres. Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0> ```` </details> ### Code Capabilities: Command R+ has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions. ### Model Card Contact For errors or additional questions about details in this model card, contact [[email protected]](mailto:[email protected]). ### Terms of Use: We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). ### Try Chat: You can try Command R+ chat in the playground [here](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus).
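Once the container described above is running, a request can be sent to the serving endpoint. The snippet below is a hedged sketch, not official FriendliAI documentation: it assumes the container exposes an OpenAI-compatible `/v1/chat/completions` route on the published port 8000; check the Friendli docs for the exact request schema and field names.

```python
# Hedged sketch: query the locally served FP8 model started with the docker run
# command above. Assumption (verify against the Friendli docs): an
# OpenAI-compatible /v1/chat/completions route is exposed on port 8000.
import requests

payload = {
    "model": "FriendliAI/c4ai-command-r-plus-fp8",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "max_tokens": 100,
    "temperature": 0.3,
}

resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```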
{"language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["pretrained"], "model_name": "CohereForAI/c4ai-command-r-plus", "base_model": "CohereForAI/c4ai-command-r-plus", "inference": false, "model_link": "https://huggingface.co/CohereForAI/c4ai-command-r-plus", "pipeline_tag": "text-generation", "quantized_by": "FriendliAI"}
FriendliAI/c4ai-command-r-plus-fp8
null
[ "transformers", "safetensors", "cohere", "text-generation", "pretrained", "conversational", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "base_model:CohereForAI/c4ai-command-r-plus", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-17T04:59:21+00:00
[]
[ "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar" ]
TAGS #transformers #safetensors #cohere #text-generation #pretrained #conversational #en #fr #de #es #it #pt #ja #ko #zh #ar #base_model-CohereForAI/c4ai-command-r-plus #license-cc-by-nc-4.0 #autotrain_compatible #text-generation-inference #8-bit #region-us
![Friendli Logo](https://i.URL width=) C4AI Command R+ FP8 =================== * Model creator: CohereForAI * Original model: c4ai-command-r-plus Description ----------- This repo contains the c4ai-command-r-plus model quantized to FP8 by FriendliAI, significantly enhancing its inference efficiency while maintaining high accuracy. Note that FP8 is only supported by NVIDIA Ada, Hopper, and Blackwell GPU architectures. Check out FriendliAI documentation for more details. Compatibility ------------- This model is compatible with Friendli Container. Prerequisites ------------- * Before you begin, make sure you have signed up for Friendli Suite. You can use Friendli Containers free of charge for four weeks. * Prepare a Personal Access Token following this guide. * Prepare a Friendli Container Secret following this guide. * Install Hugging Face CLI with 'pip install -U "huggingface\_hub[cli]"' ### Preparing Personal Access Token PAT (Personal Access Token) is the user credential for for logging into our container registry. 1. Sign in Friendli Suite. 2. Go to User Settings > Tokens and click 'Create new token'. 3. Save your created token value. ### Pulling Friendli Container Image 1. Log in to the Docker client using the personal access token created as outlined in this guide. 2. Pull image Running Friendli Container -------------------------- Once you've prepared the image of Friendli Container, you can launch it to create a serving endpoint. --- Original model card: CohereForAI's C4AI Command R+ ================================================== C4AI Command R+ =============== This model is non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes here. Model Summary ------------- C4AI Command R+ is an open weights research release of a 104B billion parameter model with highly advanced capabilities, this includes Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering. C4AI Command R+ is part of a family of open weight releases from Cohere For AI and Cohere. Our smaller companion model is C4AI Command R Developed by: Cohere and Cohere For AI * Point of Contact: Cohere For AI: URL * License: CC-BY-NC, requires also adhering to C4AI's Acceptable Use Policy * Model: c4ai-command-r-plus * Model Size: 104 billion parameters * Context length: 128K Try C4AI Command R+ You can try out C4AI Command R+ before downloading the weights in our hosted Hugging Face Space. Usage Please install 'transformers' from the source repository that includes the necessary changes for this model. Quantized model through bitsandbytes, 8-bit precision Quantized model through bitsandbytes, 4-bit precision This model is non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes here. Model Details ------------- Input: Models input text only. Output: Models generate text only. Model Architecture: This is an auto-regressive language model that uses an optimized transformer architecture. 
After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. Languages covered: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic. Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian. Context length: Command R+ supports a context length of 128K. Evaluations ----------- Command R+ has been submitted to the Open LLM leaderboard. We include the results below, along with a direct comparison to the strongest state-of-art open weights models currently available on Hugging Face. We note that these results are only useful to compare when evaluations are implemented for all models in a standardized way using publically available code, and hence shouldn't be used for comparison outside of models submitted to the leaderboard or compared to self-reported numbers which can't be replicated in the same way. We include these metrics here because they are frequently requested, but note that these metrics do not capture RAG, multilingual, tooling performance or the evaluation of open ended generations which we believe Command R+ to be state-of-art at. For evaluations of RAG, multilingual and tooling read more here. For evaluation of open ended generation, Command R+ is currently being evaluated on the chatbot arena. ### Tool use & multihop capabilities: Command R+ has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation. Command R+’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ may use one of its supplied tools more than once. The model has been trained to recognise a special 'directly\_answer' tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the 'directly\_answer' tool, but it can be removed or renamed if required. Comprehensive documentation for working with command R+'s tool use prompt template can be found here. The code snippet below shows a minimal working example on how to render a prompt. 
**Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]** **Example Rendered Tool Use Prompt [CLICK TO EXPAND]** python def internet\_search(query: str) -> List[Dict]: """Returns a list of relevant document snippets for a textual query retrieved from the internet ``` Args: query (str): Query to search the internet with """ pass ``` python def directly\_answer() -> List[Dict]: """Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass json [ { "tool\_name": title of the tool in the specification, "parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters } ]' **Example Rendered Tool Use Completion [CLICK TO EXPAND]** json [ { "tool\_name": "internet\_search", "parameters": { "query": "biggest penguin in the world" } } ] ' ### Grounded Generation and RAG Capabilities: Command R+ has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation. Command R+’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured. By default, Command R+ will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as 'accurate' grounded generation. The model is trained with a number of other answering modes, which can be selected by prompt changes. A 'fast' citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens. Comprehensive documentation for working with Command R+'s grounded generation prompt template can be found here. The code snippet below shows a minimal working example on how to render a prompt. **Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]** ' **Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]** ' **Example Rendered Grounded Generation Completion [CLICK TO EXPAND]** ' ### Code Capabilities: Command R+ has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions. ### Model Card Contact For errors or additional questions about details in this model card, contact info@URL. 
### Terms of Use: We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a CC-BY-NC License with an acceptable use addendum, and also requires adhering to C4AI's Acceptable Use Policy. ### Try Chat: You can try Command R+ chat in the playground here. You can also use it in our dedicated Hugging Face Space here.
[ "### Preparing Personal Access Token\n\n\nPAT (Personal Access Token) is the user credential for for logging into our container registry.\n\n\n1. Sign in Friendli Suite.\n2. Go to User Settings > Tokens and click 'Create new token'.\n3. Save your created token value.", "### Pulling Friendli Container Image\n\n\n1. Log in to the Docker client using the personal access token created as outlined in this guide.\n2. Pull image\n\n\nRunning Friendli Container\n--------------------------\n\n\nOnce you've prepared the image of Friendli Container, you can launch it to create a serving endpoint.\n\n\n\n\n---\n\n\nOriginal model card: CohereForAI's C4AI Command R+\n==================================================\n\n\nC4AI Command R+\n===============\n\n\nThis model is non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes here.\n\n\nModel Summary\n-------------\n\n\nC4AI Command R+ is an open weights research release of a 104B billion parameter model with highly advanced capabilities, this includes Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering.\n\n\nC4AI Command R+ is part of a family of open weight releases from Cohere For AI and Cohere. Our smaller companion model is C4AI Command R\n\n\nDeveloped by: Cohere and Cohere For AI\n\n\n* Point of Contact: Cohere For AI: URL\n* License: CC-BY-NC, requires also adhering to C4AI's Acceptable Use Policy\n* Model: c4ai-command-r-plus\n* Model Size: 104 billion parameters\n* Context length: 128K\n\n\nTry C4AI Command R+\n\n\nYou can try out C4AI Command R+ before downloading the weights in our hosted Hugging Face Space.\n\n\nUsage\n\n\nPlease install 'transformers' from the source repository that includes the necessary changes for this model.\n\n\nQuantized model through bitsandbytes, 8-bit precision\n\n\nQuantized model through bitsandbytes, 4-bit precision\n\n\nThis model is non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes here.\n\n\nModel Details\n-------------\n\n\nInput: Models input text only.\n\n\nOutput: Models generate text only.\n\n\nModel Architecture: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety.\n\n\nLanguages covered: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.\n\n\nPre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.\n\n\nContext length: Command R+ supports a context length of 128K.\n\n\nEvaluations\n-----------\n\n\nCommand R+ has been submitted to the Open LLM leaderboard. 
We include the results below, along with a direct comparison to the strongest state-of-art open weights models currently available on Hugging Face. We note that these results are only useful to compare when evaluations are implemented for all models in a standardized way using publically available code, and hence shouldn't be used for comparison outside of models submitted to the leaderboard or compared to self-reported numbers which can't be replicated in the same way.\n\n\n\nWe include these metrics here because they are frequently requested, but note that these metrics do not capture RAG, multilingual, tooling performance or the evaluation of open ended generations which we believe Command R+ to be state-of-art at. For evaluations of RAG, multilingual and tooling read more here. For evaluation of open ended generation, Command R+ is currently being evaluated on the chatbot arena.", "### Tool use & multihop capabilities:\n\n\nCommand R+ has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation.\n\n\nCommand R+’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ may use one of its supplied tools more than once.\n\n\nThe model has been trained to recognise a special 'directly\\_answer' tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions.\nWe recommend including the 'directly\\_answer' tool, but it can be removed or renamed if required.\n\n\nComprehensive documentation for working with command R+'s tool use prompt template can be found here.\n\n\nThe code snippet below shows a minimal working example on how to render a prompt.\n\n\n\n**Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]** \n\n\n**Example Rendered Tool Use Prompt [CLICK TO EXPAND]**\npython\ndef internet\\_search(query: str) -> List[Dict]:\n\"\"\"Returns a list of relevant document snippets for a textual query retrieved from the internet\n\n\n\n```\nArgs:\n query (str): Query to search the internet with\n\"\"\"\npass\n\n```\n\npython\ndef directly\\_answer() -> List[Dict]:\n\"\"\"Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history\n\"\"\"\npass\njson\n[\n{\n\"tool\\_name\": title of the tool in the specification,\n\"parameters\": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters\n}\n]'\n\n\n\n\n**Example Rendered Tool Use Completion [CLICK TO EXPAND]**\njson\n[\n{\n\"tool\\_name\": \"internet\\_search\",\n\"parameters\": {\n\"query\": \"biggest penguin in the world\"\n}\n}\n]\n'", "### Grounded Generation and RAG Capabilities:\n\n\nCommand R+ has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. 
This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation.\n\n\nCommand R+’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured.\n\n\nBy default, Command R+ will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as 'accurate' grounded generation.\n\n\nThe model is trained with a number of other answering modes, which can be selected by prompt changes. A 'fast' citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens.\n\n\nComprehensive documentation for working with Command R+'s grounded generation prompt template can be found here.\n\n\nThe code snippet below shows a minimal working example on how to render a prompt.\n\n\n\n **Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]** \n'\n\n\n\n\n**Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]**\n'\n\n\n\n\n**Example Rendered Grounded Generation Completion [CLICK TO EXPAND]**\n'", "### Code Capabilities:\n\n\nCommand R+ has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions.", "### Model Card Contact\n\n\nFor errors or additional questions about details in this model card, contact info@URL.", "### Terms of Use:\n\n\nWe hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a CC-BY-NC License with an acceptable use addendum, and also requires adhering to C4AI's Acceptable Use Policy.", "### Try Chat:\n\n\nYou can try Command R+ chat in the playground here. You can also use it in our dedicated Hugging Face Space here." ]
[ "TAGS\n#transformers #safetensors #cohere #text-generation #pretrained #conversational #en #fr #de #es #it #pt #ja #ko #zh #ar #base_model-CohereForAI/c4ai-command-r-plus #license-cc-by-nc-4.0 #autotrain_compatible #text-generation-inference #8-bit #region-us \n", "### Preparing Personal Access Token\n\n\nPAT (Personal Access Token) is the user credential for for logging into our container registry.\n\n\n1. Sign in Friendli Suite.\n2. Go to User Settings > Tokens and click 'Create new token'.\n3. Save your created token value.", "### Pulling Friendli Container Image\n\n\n1. Log in to the Docker client using the personal access token created as outlined in this guide.\n2. Pull image\n\n\nRunning Friendli Container\n--------------------------\n\n\nOnce you've prepared the image of Friendli Container, you can launch it to create a serving endpoint.\n\n\n\n\n---\n\n\nOriginal model card: CohereForAI's C4AI Command R+\n==================================================\n\n\nC4AI Command R+\n===============\n\n\nThis model is non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes here.\n\n\nModel Summary\n-------------\n\n\nC4AI Command R+ is an open weights research release of a 104B billion parameter model with highly advanced capabilities, this includes Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering.\n\n\nC4AI Command R+ is part of a family of open weight releases from Cohere For AI and Cohere. Our smaller companion model is C4AI Command R\n\n\nDeveloped by: Cohere and Cohere For AI\n\n\n* Point of Contact: Cohere For AI: URL\n* License: CC-BY-NC, requires also adhering to C4AI's Acceptable Use Policy\n* Model: c4ai-command-r-plus\n* Model Size: 104 billion parameters\n* Context length: 128K\n\n\nTry C4AI Command R+\n\n\nYou can try out C4AI Command R+ before downloading the weights in our hosted Hugging Face Space.\n\n\nUsage\n\n\nPlease install 'transformers' from the source repository that includes the necessary changes for this model.\n\n\nQuantized model through bitsandbytes, 8-bit precision\n\n\nQuantized model through bitsandbytes, 4-bit precision\n\n\nThis model is non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes here.\n\n\nModel Details\n-------------\n\n\nInput: Models input text only.\n\n\nOutput: Models generate text only.\n\n\nModel Architecture: This is an auto-regressive language model that uses an optimized transformer architecture. 
After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety.\n\n\nLanguages covered: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.\n\n\nPre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.\n\n\nContext length: Command R+ supports a context length of 128K.\n\n\nEvaluations\n-----------\n\n\nCommand R+ has been submitted to the Open LLM leaderboard. We include the results below, along with a direct comparison to the strongest state-of-art open weights models currently available on Hugging Face. We note that these results are only useful to compare when evaluations are implemented for all models in a standardized way using publically available code, and hence shouldn't be used for comparison outside of models submitted to the leaderboard or compared to self-reported numbers which can't be replicated in the same way.\n\n\n\nWe include these metrics here because they are frequently requested, but note that these metrics do not capture RAG, multilingual, tooling performance or the evaluation of open ended generations which we believe Command R+ to be state-of-art at. For evaluations of RAG, multilingual and tooling read more here. For evaluation of open ended generation, Command R+ is currently being evaluated on the chatbot arena.", "### Tool use & multihop capabilities:\n\n\nCommand R+ has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation.\n\n\nCommand R+’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ may use one of its supplied tools more than once.\n\n\nThe model has been trained to recognise a special 'directly\\_answer' tool, which it uses to indicate that it doesn’t want to use any of its other tools. 
The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions.\nWe recommend including the 'directly\\_answer' tool, but it can be removed or renamed if required.\n\n\nComprehensive documentation for working with command R+'s tool use prompt template can be found here.\n\n\nThe code snippet below shows a minimal working example on how to render a prompt.\n\n\n\n**Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]** \n\n\n**Example Rendered Tool Use Prompt [CLICK TO EXPAND]**\npython\ndef internet\\_search(query: str) -> List[Dict]:\n\"\"\"Returns a list of relevant document snippets for a textual query retrieved from the internet\n\n\n\n```\nArgs:\n query (str): Query to search the internet with\n\"\"\"\npass\n\n```\n\npython\ndef directly\\_answer() -> List[Dict]:\n\"\"\"Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history\n\"\"\"\npass\njson\n[\n{\n\"tool\\_name\": title of the tool in the specification,\n\"parameters\": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters\n}\n]'\n\n\n\n\n**Example Rendered Tool Use Completion [CLICK TO EXPAND]**\njson\n[\n{\n\"tool\\_name\": \"internet\\_search\",\n\"parameters\": {\n\"query\": \"biggest penguin in the world\"\n}\n}\n]\n'", "### Grounded Generation and RAG Capabilities:\n\n\nCommand R+ has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation.\n\n\nCommand R+’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured.\n\n\nBy default, Command R+ will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as 'accurate' grounded generation.\n\n\nThe model is trained with a number of other answering modes, which can be selected by prompt changes. A 'fast' citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. 
This sacrifices some grounding accuracy in favor of generating fewer tokens.\n\n\nComprehensive documentation for working with Command R+'s grounded generation prompt template can be found here.\n\n\nThe code snippet below shows a minimal working example on how to render a prompt.\n\n\n\n**Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]**\n\n\n\n\n**Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]**\n\n\n\n\n**Example Rendered Grounded Generation Completion [CLICK TO EXPAND]**", "### Code Capabilities:\n\n\nCommand R+ has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions.", "### Model Card Contact\n\n\nFor errors or additional questions about details in this model card, contact info@URL.", "### Terms of Use:\n\n\nWe hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a CC-BY-NC License with an acceptable use addendum, and also requires adhering to C4AI's Acceptable Use Policy.", "### Try Chat:\n\n\nYou can try Command R+ chat in the playground here. You can also use it in our dedicated Hugging Face Space here." ]
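The tool-use and grounded-generation sections above refer to prompt-rendering snippets that were collapsed in this record. The sketch below reconstructs the pattern in spirit rather than verbatim, assuming a transformers version that ships the Cohere chat templates (`apply_tool_use_template` / `apply_grounded_generation_template`); the tool schema and document snippet are illustrative.

```python
from transformers import AutoTokenizer

model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)

conversation = [{"role": "user", "content": "Whats the biggest penguin in the world?"}]

# Tool definitions follow the schema described above: name, description, parameter definitions.
tools = [
    {
        "name": "internet_search",
        "description": "Returns a list of relevant document snippets for a textual query retrieved from the internet",
        "parameter_definitions": {
            "query": {"description": "Query to search the internet with", "type": "str", "required": True}
        },
    },
    {
        "name": "directly_answer",
        "description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history",
        "parameter_definitions": {},
    },
]

# Render the tool-use prompt as a string; the model then emits a JSON list of actions.
tool_prompt = tokenizer.apply_tool_use_template(
    conversation, tools=tools, tokenize=False, add_generation_prompt=True
)

# Render the grounded-generation (RAG) prompt from document snippets instead of tools.
documents = [
    {"title": "Tall penguins", "text": "Emperor penguins are the tallest, growing up to 122 cm in height."},
]
rag_prompt = tokenizer.apply_grounded_generation_template(
    conversation, documents=documents, citation_mode="accurate", tokenize=False, add_generation_prompt=True
)

print(tool_prompt[:500])
print(rag_prompt[:500])
```

Feeding either rendered prompt to the model with greedy or low-temperature decoding should yield the action list or cited answer formats described above.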
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_2-seqsight_65536_512_47M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset. It achieves the following results on the evaluation set: - Loss: 1.4923 - F1 Score: 0.8199 - Accuracy: 0.8201 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.4054 | 100.0 | 200 | 0.7253 | 0.7405 | 0.7409 | | 0.1346 | 200.0 | 400 | 1.0998 | 0.7193 | 0.7195 | | 0.0689 | 300.0 | 600 | 1.2385 | 0.7530 | 0.7530 | | 0.0433 | 400.0 | 800 | 1.4264 | 0.7408 | 0.7409 | | 0.0303 | 500.0 | 1000 | 1.5952 | 0.7439 | 0.7439 | | 0.0237 | 600.0 | 1200 | 1.5753 | 0.7341 | 0.7348 | | 0.0196 | 700.0 | 1400 | 1.6173 | 0.7497 | 0.75 | | 0.0167 | 800.0 | 1600 | 1.7331 | 0.7529 | 0.7530 | | 0.014 | 900.0 | 1800 | 1.8038 | 0.7499 | 0.75 | | 0.013 | 1000.0 | 2000 | 1.8529 | 0.7469 | 0.7470 | | 0.0114 | 1100.0 | 2200 | 1.7976 | 0.7561 | 0.7561 | | 0.0106 | 1200.0 | 2400 | 1.8884 | 0.7529 | 0.7530 | | 0.0101 | 1300.0 | 2600 | 1.9043 | 0.7528 | 0.7530 | | 0.0097 | 1400.0 | 2800 | 1.9710 | 0.7528 | 0.7530 | | 0.0083 | 1500.0 | 3000 | 1.8886 | 0.7591 | 0.7591 | | 0.0081 | 1600.0 | 3200 | 1.8498 | 0.7530 | 0.7530 | | 0.0076 | 1700.0 | 3400 | 1.9551 | 0.7591 | 0.7591 | | 0.0077 | 1800.0 | 3600 | 1.9208 | 0.7561 | 0.7561 | | 0.0066 | 1900.0 | 3800 | 1.8954 | 0.7589 | 0.7591 | | 0.0069 | 2000.0 | 4000 | 1.8680 | 0.7407 | 0.7409 | | 0.0064 | 2100.0 | 4200 | 2.0258 | 0.7683 | 0.7683 | | 0.0059 | 2200.0 | 4400 | 1.9716 | 0.7587 | 0.7591 | | 0.0068 | 2300.0 | 4600 | 2.0603 | 0.7713 | 0.7713 | | 0.0059 | 2400.0 | 4800 | 2.0135 | 0.7651 | 0.7652 | | 0.0061 | 2500.0 | 5000 | 1.9758 | 0.7621 | 0.7622 | | 0.005 | 2600.0 | 5200 | 2.1556 | 0.7652 | 0.7652 | | 0.0053 | 2700.0 | 5400 | 2.0520 | 0.7498 | 0.75 | | 0.0054 | 2800.0 | 5600 | 2.2497 | 0.7560 | 0.7561 | | 0.0047 | 2900.0 | 5800 | 2.0620 | 0.7559 | 0.7561 | | 0.005 | 3000.0 | 6000 | 1.9706 | 0.7618 | 0.7622 | | 0.0045 | 3100.0 | 6200 | 2.1524 | 0.7587 | 0.7591 | | 0.0042 | 3200.0 | 6400 | 2.2165 | 0.7561 | 0.7561 | | 0.0049 | 3300.0 | 6600 | 1.9786 | 0.7589 | 0.7591 | | 0.0039 | 3400.0 | 6800 | 2.2495 | 0.7713 | 0.7713 | | 0.004 | 3500.0 | 7000 | 2.3557 | 0.7591 | 0.7591 | | 0.0039 | 3600.0 | 7200 | 2.1475 | 0.7621 | 0.7622 | | 0.0038 | 3700.0 | 7400 | 2.1291 | 0.7591 | 0.7591 | | 0.0038 | 3800.0 | 7600 | 2.2240 | 0.7591 | 0.7591 | | 0.0036 | 3900.0 | 7800 | 2.2950 | 0.7683 | 0.7683 | | 0.0039 | 4000.0 | 8000 | 2.1987 | 0.7591 | 0.7591 | | 0.0035 | 4100.0 | 8200 | 2.2783 | 0.7621 | 0.7622 | | 0.0036 | 4200.0 | 8400 | 2.2651 | 0.7591 | 0.7591 | | 0.0032 | 4300.0 | 8600 | 2.2795 | 0.7591 | 0.7591 | | 0.003 | 4400.0 | 8800 | 
2.3454 | 0.7591 | 0.7591 | | 0.0032 | 4500.0 | 9000 | 2.3081 | 0.7591 | 0.7591 | | 0.003 | 4600.0 | 9200 | 2.2963 | 0.7652 | 0.7652 | | 0.0027 | 4700.0 | 9400 | 2.3278 | 0.7622 | 0.7622 | | 0.0028 | 4800.0 | 9600 | 2.3769 | 0.7622 | 0.7622 | | 0.0026 | 4900.0 | 9800 | 2.3410 | 0.7622 | 0.7622 | | 0.003 | 5000.0 | 10000 | 2.3142 | 0.7622 | 0.7622 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
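The card above documents hyperparameters and metrics but no inference code. A possible loading sketch follows; it assumes the adapter targets sequence classification with two labels and that the seqsight base model loads through the standard Auto classes (none of which is stated on the card), so treat every name and argument below as an assumption.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_2-seqsight_65536_512_47M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels=2 is a guess based on the binary-looking F1/accuracy scores;
# trust_remote_code=True may be required if the base model ships custom code.
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
print(model(**inputs).logits.softmax(-1))
```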
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_2-seqsight_65536_512_47M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_65536_512_47M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-04-17T04:59:31+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
GUE\_mouse\_2-seqsight\_65536\_512\_47M-L32\_all ================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset. It achieves the following results on the evaluation set: * Loss: 1.4923 * F1 Score: 0.8199 * Accuracy: 0.8201 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # with_board_only_history This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.2 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
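A minimal sketch of running this adapter for generation is shown below. It assumes the repository holds a standard PEFT (LoRA-style) checkpoint on top of the gated Llama-2-7b-chat base, and the prompt is a placeholder, since the card does not document the training prompt format.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"      # gated: requires accepting the Llama 2 license
adapter_id = "wenshicheng97/with_board_only_history"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Describe the current board state."   # placeholder; the real prompt format is undocumented
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```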
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "with_board_only_history", "results": []}]}
wenshicheng97/with_board_only_history
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
null
2024-04-17T04:59:39+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us
# with_board_only_history This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.2 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# with_board_only_history\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.41e-05\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.2\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us \n", "# with_board_only_history\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.41e-05\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.2\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_splice_reconstructed-seqsight_65536_512_47M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset. It achieves the following results on the evaluation set: - Loss: 0.9268 - F1 Score: 0.6104 - Accuracy: 0.6192 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.9721 | 11.11 | 200 | 0.9267 | 0.4980 | 0.5971 | | 0.8957 | 22.22 | 400 | 0.8953 | 0.5500 | 0.6118 | | 0.8593 | 33.33 | 600 | 0.8799 | 0.5755 | 0.6127 | | 0.8267 | 44.44 | 800 | 0.8708 | 0.5888 | 0.6072 | | 0.7956 | 55.56 | 1000 | 0.8686 | 0.5976 | 0.6125 | | 0.7723 | 66.67 | 1200 | 0.8550 | 0.5989 | 0.6230 | | 0.7545 | 77.78 | 1400 | 0.8607 | 0.6054 | 0.6247 | | 0.7387 | 88.89 | 1600 | 0.8554 | 0.6105 | 0.6228 | | 0.7276 | 100.0 | 1800 | 0.8702 | 0.6127 | 0.6184 | | 0.7202 | 111.11 | 2000 | 0.8810 | 0.6117 | 0.6289 | | 0.7124 | 122.22 | 2200 | 0.8644 | 0.6137 | 0.6252 | | 0.7053 | 133.33 | 2400 | 0.8663 | 0.6108 | 0.6219 | | 0.6987 | 144.44 | 2600 | 0.8684 | 0.6117 | 0.6241 | | 0.6928 | 155.56 | 2800 | 0.8729 | 0.6143 | 0.6249 | | 0.6881 | 166.67 | 3000 | 0.8677 | 0.6173 | 0.6278 | | 0.6827 | 177.78 | 3200 | 0.8776 | 0.6100 | 0.6162 | | 0.6763 | 188.89 | 3400 | 0.8687 | 0.6158 | 0.6315 | | 0.6711 | 200.0 | 3600 | 0.8735 | 0.6105 | 0.6317 | | 0.6663 | 211.11 | 3800 | 0.8725 | 0.6116 | 0.6326 | | 0.6597 | 222.22 | 4000 | 0.8821 | 0.6144 | 0.6265 | | 0.6552 | 233.33 | 4200 | 0.8672 | 0.6105 | 0.6245 | | 0.6488 | 244.44 | 4400 | 0.8847 | 0.6098 | 0.6282 | | 0.6438 | 255.56 | 4600 | 0.8961 | 0.6104 | 0.6225 | | 0.6393 | 266.67 | 4800 | 0.8717 | 0.6112 | 0.6263 | | 0.6323 | 277.78 | 5000 | 0.8906 | 0.6062 | 0.6285 | | 0.6264 | 288.89 | 5200 | 0.8846 | 0.6165 | 0.6278 | | 0.6219 | 300.0 | 5400 | 0.9003 | 0.6087 | 0.6293 | | 0.6154 | 311.11 | 5600 | 0.8922 | 0.6179 | 0.6337 | | 0.6114 | 322.22 | 5800 | 0.9030 | 0.6138 | 0.6285 | | 0.6065 | 333.33 | 6000 | 0.8958 | 0.6115 | 0.6219 | | 0.5997 | 344.44 | 6200 | 0.9092 | 0.6109 | 0.6265 | | 0.5949 | 355.56 | 6400 | 0.9194 | 0.6131 | 0.6263 | | 0.5914 | 366.67 | 6600 | 0.9015 | 0.6142 | 0.6258 | | 0.587 | 377.78 | 6800 | 0.9139 | 0.6155 | 0.6300 | | 0.5821 | 388.89 | 7000 | 0.9148 | 0.6151 | 0.6313 | | 0.5768 | 400.0 | 7200 | 0.8992 | 0.6140 | 0.6282 | | 0.5746 | 411.11 | 7400 | 0.9159 | 0.6131 | 0.6260 | | 0.5715 | 422.22 | 7600 | 0.9260 | 0.6165 | 0.6291 | | 0.5677 | 433.33 | 7800 | 0.9193 | 0.6164 | 0.6293 | | 0.5632 | 444.44 | 8000 | 0.9310 | 0.6127 | 0.6238 | | 0.5606 | 455.56 | 8200 | 0.9283 | 0.6201 | 0.6291 | | 0.5606 | 466.67 | 8400 | 0.9315 | 0.6165 | 0.6304 | | 0.5562 | 477.78 | 8600 | 0.9282 | 
0.6156 | 0.6258 | | 0.5534 | 488.89 | 8800 | 0.9374 | 0.6155 | 0.6247 | | 0.5526 | 500.0 | 9000 | 0.9272 | 0.6155 | 0.6258 | | 0.552 | 511.11 | 9200 | 0.9341 | 0.6163 | 0.6256 | | 0.5487 | 522.22 | 9400 | 0.9343 | 0.6157 | 0.6274 | | 0.5478 | 533.33 | 9600 | 0.9328 | 0.6143 | 0.6258 | | 0.5469 | 544.44 | 9800 | 0.9341 | 0.6163 | 0.6276 | | 0.5472 | 555.56 | 10000 | 0.9338 | 0.6155 | 0.6269 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
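The hyperparameters listed above map fairly directly onto a transformers `TrainingArguments` object. The sketch below is a reconstruction, not the authors' script: the output directory, eval/logging cadence, and dataset wiring are assumptions, and the Adam betas/epsilon reported on the card are the library defaults.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_splice_reconstructed-seqsight_65536_512_47M-L32_all",
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,          # assumption: the eval table above reports every 200 steps
    logging_steps=200,
)
print(args.lr_scheduler_type, args.max_steps)
```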
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_65536_512_47M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_65536_512_47M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-04-17T05:00:27+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
GUE\_splice\_reconstructed-seqsight\_65536\_512\_47M-L32\_all ============================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset. It achieves the following results on the evaluation set: * Loss: 0.9268 * F1 Score: 0.6104 * Accuracy: 0.6192 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
reinforcement-learning
null
# **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
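The card links to Unit 4 of the course but ships no code. Below is a generic REINFORCE sketch for CartPole-v1 using gymnasium and PyTorch; it is not the course's exact implementation and not the hyperparameters behind the reported 484.90 mean reward.

```python
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2), nn.Softmax(dim=-1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        probs = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)

    # Discounted returns, computed backwards, then normalized for stability.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.as_tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    loss = -(torch.stack(log_probs) * returns).sum()  # policy-gradient objective
    opt.zero_grad()
    loss.backward()
    opt.step()

env.close()
```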
{"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "CartPole-v1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "484.90 +/- 27.35", "name": "mean_reward", "verified": false}]}]}]}
MLIsaac/CartPole-v1
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-17T05:02:10+00:00
[]
[]
TAGS #CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing CartPole-v1 This is a trained model of a Reinforce agent playing CartPole-v1 . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-poison-1p This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9307 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.8526 | 1.0 | 650 | 0.9307 | ### Framework versions - PEFT 0.8.2 - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
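Once the adapter is trained, a common deployment step is folding it back into the base weights. The sketch below assumes this repository holds a standard LoRA adapter for the gated meta-llama/Llama-2-7b-hf base; `merge_and_unload` only applies if that assumption holds.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"          # gated base model
adapter_id = "terry69/llama-poison-1p"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()

# The merged model can now be saved and served without the peft runtime.
merged.save_pretrained("llama-poison-1p-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("llama-poison-1p-merged")
```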
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "llama-poison-1p", "results": []}]}
terry69/llama-poison-1p
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-04-17T05:03:26+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-hf #license-llama2 #region-us
llama-poison-1p =============== This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.9307 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 2 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * total\_eval\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.8.2 * Transformers 4.38.2 * Pytorch 2.2.2+cu121 * Datasets 2.16.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.8.2\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-hf #license-llama2 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.8.2\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_0-seqsight_65536_512_47M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.6076 - F1 Score: 0.7066 - Accuracy: 0.707 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 1536 - eval_batch_size: 1536 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6651 | 9.09 | 200 | 0.6427 | 0.6062 | 0.608 | | 0.6219 | 18.18 | 400 | 0.6256 | 0.6414 | 0.645 | | 0.5976 | 27.27 | 600 | 0.6186 | 0.6559 | 0.656 | | 0.5747 | 36.36 | 800 | 0.6172 | 0.6667 | 0.667 | | 0.5532 | 45.45 | 1000 | 0.6054 | 0.6733 | 0.674 | | 0.5407 | 54.55 | 1200 | 0.6141 | 0.6799 | 0.68 | | 0.5301 | 63.64 | 1400 | 0.6026 | 0.6769 | 0.677 | | 0.524 | 72.73 | 1600 | 0.6120 | 0.6810 | 0.681 | | 0.5154 | 81.82 | 1800 | 0.6185 | 0.6699 | 0.67 | | 0.5118 | 90.91 | 2000 | 0.6064 | 0.6791 | 0.679 | | 0.5066 | 100.0 | 2200 | 0.5975 | 0.6830 | 0.683 | | 0.503 | 109.09 | 2400 | 0.6072 | 0.6841 | 0.684 | | 0.4976 | 118.18 | 2600 | 0.6079 | 0.6861 | 0.686 | | 0.4953 | 127.27 | 2800 | 0.6143 | 0.6808 | 0.681 | | 0.4889 | 136.36 | 3000 | 0.6138 | 0.6780 | 0.678 | | 0.4872 | 145.45 | 3200 | 0.6128 | 0.6791 | 0.679 | | 0.4825 | 154.55 | 3400 | 0.5949 | 0.6890 | 0.689 | | 0.4788 | 163.64 | 3600 | 0.6213 | 0.6829 | 0.683 | | 0.4757 | 172.73 | 3800 | 0.6170 | 0.6791 | 0.679 | | 0.4731 | 181.82 | 4000 | 0.6225 | 0.6841 | 0.684 | | 0.4712 | 190.91 | 4200 | 0.6073 | 0.7001 | 0.7 | | 0.4673 | 200.0 | 4400 | 0.6115 | 0.6861 | 0.686 | | 0.4661 | 209.09 | 4600 | 0.6274 | 0.6874 | 0.688 | | 0.4604 | 218.18 | 4800 | 0.6110 | 0.6981 | 0.698 | | 0.4574 | 227.27 | 5000 | 0.6223 | 0.7011 | 0.701 | | 0.4542 | 236.36 | 5200 | 0.6262 | 0.6941 | 0.694 | | 0.4538 | 245.45 | 5400 | 0.6177 | 0.7051 | 0.705 | | 0.4489 | 254.55 | 5600 | 0.6289 | 0.7001 | 0.7 | | 0.4459 | 263.64 | 5800 | 0.6204 | 0.6951 | 0.695 | | 0.4451 | 272.73 | 6000 | 0.6252 | 0.6969 | 0.697 | | 0.4397 | 281.82 | 6200 | 0.6261 | 0.6971 | 0.697 | | 0.4378 | 290.91 | 6400 | 0.6187 | 0.6971 | 0.697 | | 0.4344 | 300.0 | 6600 | 0.6315 | 0.7050 | 0.705 | | 0.434 | 309.09 | 6800 | 0.6305 | 0.6990 | 0.699 | | 0.4323 | 318.18 | 7000 | 0.6320 | 0.6970 | 0.697 | | 0.4307 | 327.27 | 7200 | 0.6208 | 0.6941 | 0.694 | | 0.4281 | 336.36 | 7400 | 0.6259 | 0.6951 | 0.695 | | 0.4248 | 345.45 | 7600 | 0.6360 | 0.7001 | 0.7 | | 0.4225 | 354.55 | 7800 | 0.6360 | 0.7000 | 0.7 | | 0.4221 | 363.64 | 8000 | 0.6406 | 0.6908 | 0.691 | | 0.4196 | 372.73 | 8200 | 0.6338 | 0.6991 | 0.699 | | 0.4206 | 381.82 | 8400 | 0.6294 | 0.6971 | 0.697 | | 0.4175 | 390.91 | 8600 | 0.6295 | 0.6981 | 0.698 | | 0.4174 | 400.0 | 8800 | 0.6390 | 0.696 | 0.696 | | 0.4157 | 409.09 | 9000 | 0.6319 | 
0.6951 | 0.695 | | 0.4133 | 418.18 | 9200 | 0.6440 | 0.6908 | 0.691 | | 0.4139 | 427.27 | 9400 | 0.6404 | 0.6931 | 0.693 | | 0.4136 | 436.36 | 9600 | 0.6409 | 0.6920 | 0.692 | | 0.414 | 445.45 | 9800 | 0.6435 | 0.6950 | 0.695 | | 0.4117 | 454.55 | 10000 | 0.6437 | 0.6930 | 0.693 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
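The F1 and accuracy columns above are the kind of numbers a `compute_metrics` callback produces during `Trainer` evaluation. A sketch using the `evaluate` library is shown below; the F1 averaging mode is an assumption, since the card does not state it.

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        # "weighted" is a guess; the card does not state the averaging mode.
        "f1": f1.compute(predictions=preds, references=labels, average="weighted")["f1"],
    }
```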
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_0-seqsight_65536_512_47M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_tf_0-seqsight_65536_512_47M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-04-17T05:03:35+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
GUE\_tf\_0-seqsight\_65536\_512\_47M-L32\_all ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.6076 * F1 Score: 0.7066 * Accuracy: 0.707 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 1536 * eval\_batch\_size: 1536 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
null
# T3qMergerix-7B T3qMergerix-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 - model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO - model: MiniMoog/Mergerix-7b-v0.3 merge_method: model_stock base_model: mistralai/Mistral-7B-v0.1 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/T3qMergerix-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
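The card shows the merge configuration and an inference snippet, but not the merge step itself. Assuming the YAML is consumed by mergekit (as the `mergekit`/`lazymergekit` tags suggest), a notebook-style sketch would look roughly like this; the output path is arbitrary.

```python
# Assumes mergekit is installable from PyPI; otherwise install it from its GitHub repository.
!pip install -qU mergekit

# Save the YAML above as config.yaml, then run the merge. Extra flags (shard size,
# tokenizer copying, etc.) can be added as needed; this is the minimal invocation.
!mergekit-yaml config.yaml ./T3qMergerix-7B
```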
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]}
automerger/T3qMergerix-7B
null
[ "merge", "mergekit", "lazymergekit", "automerger", "license:apache-2.0", "region:us" ]
null
2024-04-17T05:04:10+00:00
[]
[]
TAGS #merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us
# T3qMergerix-7B T3qMergerix-7B is an automated merge created by Maxime Labonne using the following configuration. ## Configuration ## Usage
[ "# T3qMergerix-7B\n\nT3qMergerix-7B is an automated merge created by Maxime Labonne using the following configuration.", "## Configuration", "## Usage" ]
[ "TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n", "# T3qMergerix-7B\n\nT3qMergerix-7B is an automated merge created by Maxime Labonne using the following configuration.", "## Configuration", "## Usage" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_1-seqsight_65536_512_47M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.5477 - F1 Score: 0.7309 - Accuracy: 0.731 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6592 | 13.33 | 200 | 0.6495 | 0.6033 | 0.607 | | 0.6107 | 26.67 | 400 | 0.6363 | 0.6399 | 0.64 | | 0.5799 | 40.0 | 600 | 0.6327 | 0.6369 | 0.637 | | 0.5506 | 53.33 | 800 | 0.6434 | 0.6484 | 0.654 | | 0.5313 | 66.67 | 1000 | 0.6420 | 0.6538 | 0.654 | | 0.5205 | 80.0 | 1200 | 0.6391 | 0.6540 | 0.654 | | 0.5121 | 93.33 | 1400 | 0.6552 | 0.6500 | 0.65 | | 0.506 | 106.67 | 1600 | 0.6545 | 0.6498 | 0.65 | | 0.5004 | 120.0 | 1800 | 0.6450 | 0.6490 | 0.649 | | 0.4956 | 133.33 | 2000 | 0.6594 | 0.6573 | 0.658 | | 0.4913 | 146.67 | 2200 | 0.6655 | 0.6543 | 0.655 | | 0.4853 | 160.0 | 2400 | 0.6853 | 0.6550 | 0.655 | | 0.4795 | 173.33 | 2600 | 0.6759 | 0.6636 | 0.664 | | 0.4731 | 186.67 | 2800 | 0.6927 | 0.6556 | 0.656 | | 0.4688 | 200.0 | 3000 | 0.7036 | 0.6690 | 0.669 | | 0.4642 | 213.33 | 3200 | 0.7004 | 0.6579 | 0.658 | | 0.4583 | 226.67 | 3400 | 0.6976 | 0.6557 | 0.656 | | 0.4529 | 240.0 | 3600 | 0.7143 | 0.6559 | 0.656 | | 0.449 | 253.33 | 3800 | 0.7127 | 0.6477 | 0.648 | | 0.4429 | 266.67 | 4000 | 0.7309 | 0.6578 | 0.658 | | 0.4371 | 280.0 | 4200 | 0.7469 | 0.6514 | 0.652 | | 0.4317 | 293.33 | 4400 | 0.7238 | 0.6510 | 0.651 | | 0.4266 | 306.67 | 4600 | 0.7404 | 0.6530 | 0.653 | | 0.4216 | 320.0 | 4800 | 0.7518 | 0.6498 | 0.65 | | 0.4165 | 333.33 | 5000 | 0.7623 | 0.6488 | 0.649 | | 0.4119 | 346.67 | 5200 | 0.7583 | 0.6430 | 0.644 | | 0.4069 | 360.0 | 5400 | 0.7826 | 0.6324 | 0.634 | | 0.4046 | 373.33 | 5600 | 0.7873 | 0.6470 | 0.647 | | 0.3982 | 386.67 | 5800 | 0.7936 | 0.6450 | 0.645 | | 0.3961 | 400.0 | 6000 | 0.7770 | 0.6400 | 0.64 | | 0.3908 | 413.33 | 6200 | 0.7884 | 0.6448 | 0.645 | | 0.3876 | 426.67 | 6400 | 0.7895 | 0.6470 | 0.647 | | 0.3831 | 440.0 | 6600 | 0.7965 | 0.6450 | 0.645 | | 0.3799 | 453.33 | 6800 | 0.8196 | 0.6509 | 0.651 | | 0.3769 | 466.67 | 7000 | 0.7986 | 0.6350 | 0.635 | | 0.3748 | 480.0 | 7200 | 0.8324 | 0.64 | 0.64 | | 0.3713 | 493.33 | 7400 | 0.8162 | 0.6410 | 0.641 | | 0.3681 | 506.67 | 7600 | 0.8072 | 0.6409 | 0.641 | | 0.3674 | 520.0 | 7800 | 0.8191 | 0.6458 | 0.646 | | 0.3641 | 533.33 | 8000 | 0.8127 | 0.6460 | 0.646 | | 0.3622 | 546.67 | 8200 | 0.8402 | 0.6440 | 0.644 | | 0.3613 | 560.0 | 8400 | 0.8076 | 0.6400 | 0.64 | | 0.3584 | 573.33 | 8600 | 0.8270 | 0.6490 | 0.649 | | 0.3567 | 586.67 | 8800 | 0.8132 | 0.6530 | 0.653 | | 0.3568 | 600.0 | 9000 | 0.8259 | 0.644 
| 0.644 | | 0.3553 | 613.33 | 9200 | 0.8248 | 0.6498 | 0.65 | | 0.3548 | 626.67 | 9400 | 0.8155 | 0.6450 | 0.645 | | 0.3529 | 640.0 | 9600 | 0.8233 | 0.6500 | 0.65 | | 0.352 | 653.33 | 9800 | 0.8226 | 0.6490 | 0.649 | | 0.3511 | 666.67 | 10000 | 0.8218 | 0.6460 | 0.646 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
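All of these seqsight fine-tunes report a linear learning-rate schedule over 10,000 steps with no warmup. As an illustration of what that schedule does (placeholder parameters only, no real training), transformers exposes it via `get_linear_schedule_with_warmup`:

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Placeholder parameter purely to illustrate the schedule shape.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=5e-4, betas=(0.9, 0.999), eps=1e-8)

scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000
)

for step in range(10_000):
    optimizer.step()
    scheduler.step()
    if step in (0, 5_000, 9_999):
        print(step, scheduler.get_last_lr())   # LR decays linearly from 5e-4 toward 0
```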
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_1-seqsight_65536_512_47M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_tf_1-seqsight_65536_512_47M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-04-17T05:04:13+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
GUE\_tf\_1-seqsight\_65536\_512\_47M-L32\_all ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.5477 * F1 Score: 0.7309 * Accuracy: 0.731 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA text2image fine-tuning - iamkaikai/CONCEPT-DESIGN-LORA These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the iamkaikai/CONCEPT-DESIGN-ART dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) ![img_4](./image_4.png) ![img_5](./image_5.png) ![img_6](./image_6.png) ![img_7](./image_7.png) ![img_8](./image_8.png) ![img_9](./image_9.png) ![img_10](./image_10.png) ![img_11](./image_11.png) ![img_12](./image_12.png) ![img_13](./image_13.png) ![img_14](./image_14.png) ![img_15](./image_15.png) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
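The "How to use" section above is still a TODO. A typical diffusers pattern for this kind of LoRA checkpoint is sketched below; the prompt is illustrative, and it assumes the repository contains the default LoRA weight file written by the diffusers training script.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA weights from this repository on top of the base UNet/text encoder.
pipe.load_lora_weights("iamkaikai/CONCEPT-DESIGN-LORA")

image = pipe("concept design of a modular desk lamp", num_inference_steps=30).images[0]
image.save("concept.png")
```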
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "diffusers-training", "lora", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "diffusers-training", "lora", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "diffusers-training", "lora"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true}
iamkaikai/CONCEPT-DESIGN-LORA
null
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
null
2024-04-17T05:04:27+00:00
[]
[]
TAGS #diffusers #stable-diffusion #stable-diffusion-diffusers #text-to-image #diffusers-training #lora #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #region-us
# LoRA text2image fine-tuning - iamkaikai/CONCEPT-DESIGN-LORA These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the iamkaikai/CONCEPT-DESIGN-ART dataset. You can find some example images in the following. !img_0 !img_1 !img_2 !img_3 !img_4 !img_5 !img_6 !img_7 !img_8 !img_9 !img_10 !img_11 !img_12 !img_13 !img_14 !img_15 ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# LoRA text2image fine-tuning - iamkaikai/CONCEPT-DESIGN-LORA\nThese are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the iamkaikai/CONCEPT-DESIGN-ART dataset. You can find some example images in the following. \n\n!img_0\n!img_1\n!img_2\n!img_3\n!img_4\n!img_5\n!img_6\n!img_7\n!img_8\n!img_9\n!img_10\n!img_11\n!img_12\n!img_13\n!img_14\n!img_15", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #stable-diffusion #stable-diffusion-diffusers #text-to-image #diffusers-training #lora #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #region-us \n", "# LoRA text2image fine-tuning - iamkaikai/CONCEPT-DESIGN-LORA\nThese are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the iamkaikai/CONCEPT-DESIGN-ART dataset. You can find some example images in the following. \n\n!img_0\n!img_1\n!img_2\n!img_3\n!img_4\n!img_5\n!img_6\n!img_7\n!img_8\n!img_9\n!img_10\n!img_11\n!img_12\n!img_13\n!img_14\n!img_15", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
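The card itself is an unfilled template, but the repository tags indicate a 4-bit Mistral text-generation checkpoint. The sketch below is a generic loading pattern, not the authors' documented usage; it assumes the quantization config is stored with the checkpoint (so bitsandbytes and accelerate must be installed), and the prompt is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "daanjiri/Biomistral_7b_bhc_full"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" lets accelerate place the (presumably 4-bit) weights across devices.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Summarize the patient's brief hospital course:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```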
{"library_name": "transformers", "tags": []}
daanjiri/Biomistral_7b_bhc_full
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-17T05:05:24+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505-Dev-CSI-PhoBERT_base This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
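A quick way to query the fine-tuned classifier is the `text-classification` pipeline, sketched below. The label names are whatever the checkpoint config defines (the card does not list them), and whether inputs must be word-segmented Vietnamese, as PhoBERT's pretraining assumes, is not documented either.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="ThuyNT/CS505-Dev-CSI-PhoBERT_base")

# PhoBERT was pretrained on word-segmented Vietnamese, so inputs may need to be
# pre-segmented (e.g. with VnCoreNLP); shown here unsegmented as an assumption.
print(clf("Sản phẩm này rất tốt"))
```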
{"tags": ["generated_from_trainer"], "base_model": "vinai/phobert-base", "model-index": [{"name": "CS505-Dev-CSI-PhoBERT_base", "results": []}]}
ThuyNT/CS505-Dev-CSI-PhoBERT_base
null
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:vinai/phobert-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T05:06:20+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-vinai/phobert-base #autotrain_compatible #endpoints_compatible #region-us
# CS505-Dev-CSI-PhoBERT_base This model is a fine-tuned version of vinai/phobert-base on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# CS505-Dev-CSI-PhoBERT_base\n\nThis model is a fine-tuned version of vinai/phobert-base on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-vinai/phobert-base #autotrain_compatible #endpoints_compatible #region-us \n", "# CS505-Dev-CSI-PhoBERT_base\n\nThis model is a fine-tuned version of vinai/phobert-base on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
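Because the card's "How to Get Started" section is still a placeholder, the following is only a sketch of the usual loading pattern, assuming the repo (MayurPai/Llama-2-7b-hf-fine-tuned, per this record's id) holds a standard causal-LM fine-tune of Llama-2-7b-hf; the card itself does not confirm the architecture or precision.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MayurPai/Llama-2-7b-hf-fine-tuned"  # repo id from this record

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```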
{"library_name": "transformers", "tags": []}
MayurPai/Llama-2-7b-hf-fine-tuned
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T05:07:11+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
null
## Llamacpp Quantizations of CodeQwen1.5-7B Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> PR <a href="https://github.com/ggerganov/llama.cpp/pull/6707">6707</a> for quantization. Original model: https://huggingface.co/Qwen/CodeQwen1.5-7B All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [CodeQwen1.5-7B-Q8_0.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-Q8_0.gguf) | Q8_0 | 7.70GB | Extremely high quality, generally unneeded but max available quant. | | [CodeQwen1.5-7B-Q6_K.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-Q6_K.gguf) | Q6_K | 6.37GB | Very high quality, near perfect, *recommended*. | | [CodeQwen1.5-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-Q5_K_M.gguf) | Q5_K_M | 5.42GB | High quality, *recommended*. | | [CodeQwen1.5-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-Q5_K_S.gguf) | Q5_K_S | 5.14GB | High quality, *recommended*. | | [CodeQwen1.5-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-Q4_K_M.gguf) | Q4_K_M | 4.73GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [CodeQwen1.5-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-Q4_K_S.gguf) | Q4_K_S | 4.41GB | Slightly lower quality with more space savings, *recommended*. | | [CodeQwen1.5-7B-IQ4_NL.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-IQ4_NL.gguf) | IQ4_NL | 4.18GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [CodeQwen1.5-7B-IQ4_XS.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-IQ4_XS.gguf) | IQ4_XS | 4.03GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [CodeQwen1.5-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-Q3_K_L.gguf) | Q3_K_L | 3.98GB | Lower quality but usable, good for low RAM availability. | | [CodeQwen1.5-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-Q3_K_M.gguf) | Q3_K_M | 3.80GB | Even lower quality. | | [CodeQwen1.5-7B-IQ3_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-IQ3_M.gguf) | IQ3_M | 3.60GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [CodeQwen1.5-7B-IQ3_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-IQ3_S.gguf) | IQ3_S | 3.50GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [CodeQwen1.5-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-Q3_K_S.gguf) | Q3_K_S | 3.50GB | Low quality, not recommended. | | [CodeQwen1.5-7B-IQ3_XS.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-IQ3_XS.gguf) | IQ3_XS | 3.35GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. 
| | [CodeQwen1.5-7B-IQ3_XXS.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-IQ3_XXS.gguf) | IQ3_XXS | 3.22GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [CodeQwen1.5-7B-Q2_K.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-Q2_K.gguf) | Q2_K | 3.05GB | Very low quality but surprisingly usable. | | [CodeQwen1.5-7B-IQ2_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-IQ2_M.gguf) | IQ2_M | 3.00GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [CodeQwen1.5-7B-IQ2_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-IQ2_S.gguf) | IQ2_S | 2.87GB | Very low quality, uses SOTA techniques to be usable. | | [CodeQwen1.5-7B-IQ2_XS.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-IQ2_XS.gguf) | IQ2_XS | 2.76GB | Very low quality, uses SOTA techniques to be usable. | | [CodeQwen1.5-7B-IQ2_XXS.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-IQ2_XXS.gguf) | IQ2_XXS | 2.61GB | Lower quality, uses SOTA techniques to be usable. | | [CodeQwen1.5-7B-IQ1_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-IQ1_M.gguf) | IQ1_M | 2.45GB | Extremely low quality, *not* recommended. | | [CodeQwen1.5-7B-IQ1_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-GGUF/blob/main/CodeQwen1.5-7B-IQ1_S.gguf) | IQ1_S | 2.36GB | Extremely low quality, *not* recommended. | ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
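To make the quant table and prompt format above concrete, here is a hedged sketch using llama-cpp-python, one of several llama.cpp front ends (the card itself only references llama.cpp). The repo id and filename come from the table; the system and user strings are placeholders.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch a single quant file rather than the whole branch (names as listed in the table).
gguf_path = hf_hub_download(
    repo_id="bartowski/CodeQwen1.5-7B-GGUF",
    filename="CodeQwen1.5-7B-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)  # -1 offloads all layers on GPU builds

# Prompt assembled with the ChatML-style template shown in the card.
prompt = (
    "<|im_start|>system\nYou are a helpful coding assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a Python function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```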
{"language": ["en"], "license": "other", "tags": ["pretrained"], "license_name": "tongyi-qianwen-research", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE", "pipeline_tag": "text-generation", "quantized_by": "bartowski"}
bartowski/CodeQwen1.5-7B-GGUF
null
[ "gguf", "pretrained", "text-generation", "en", "license:other", "region:us" ]
null
2024-04-17T05:08:04+00:00
[]
[ "en" ]
TAGS #gguf #pretrained #text-generation #en #license-other #region-us
Llamacpp Quantizations of CodeQwen1.5-7B ---------------------------------------- Using <a href="URL PR <a href="URL for quantization. Original model: URL All quants made using imatrix option with dataset provided by Kalomaze here Prompt format ------------- Download a file (not the whole branch) from below: -------------------------------------------------- Which file should I choose? --------------------------- A great write up with charts showing various performances is provided by Artefact2 here The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX\_K\_X', like Q5\_K\_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: URL feature matrix But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX\_X, like IQ3\_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: URL
[]
[ "TAGS\n#gguf #pretrained #text-generation #en #license-other #region-us \n" ]
text-generation
transformers
# zephyr-beta-wizardLM-2-merge-7B This is a merge of two pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). The goal was to explore the impact of merging on reasoning and narrative generation. Given that both fine-tuned models are BF16 precision (despite the precision of the base model Mistral 7B v0.1 being FP16), this avoids issues with mixed precision during merging. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [lucyknada/microsoft_WizardLM-2-7B](https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B) * [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: HuggingFaceH4/zephyr-7b-beta layer_range: [0,32] - model: lucyknada/microsoft_WizardLM-2-7B layer_range: [0,32] merge_method: slerp base_model: HuggingFaceH4/zephyr-7b-beta parameters: t: - value: 0.5 dtype: bfloat16 ```
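A short inference sketch to accompany the merge description above. It assumes the merged repo ships a usable chat template inherited from one of the parent models and that bfloat16 weights fit on the available hardware; neither is stated in the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/zephyr-beta-wizardLM-2-merge-7B"  # repo id from this record

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "In two sentences, what does a SLERP merge of two models do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```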
{"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["lucyknada/microsoft_WizardLM-2-7B", "HuggingFaceH4/zephyr-7b-beta"], "pipeline_tag": "text-generation"}
grimjim/zephyr-beta-wizardLM-2-merge-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:lucyknada/microsoft_WizardLM-2-7B", "base_model:HuggingFaceH4/zephyr-7b-beta", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T05:09:40+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-lucyknada/microsoft_WizardLM-2-7B #base_model-HuggingFaceH4/zephyr-7b-beta #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# zephyr-beta-wizardLM-2-merge-7B This is a merge of two pre-trained language models created using mergekit. The goal was to explore the impact of merging on reasoning and narrative generation. Given that both fine-tuned models are BF16 precision (despite the precision of the base model Mistral 7B v0.1 being FP16), this avoids issues with mixed precision during merging. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * lucyknada/microsoft_WizardLM-2-7B * HuggingFaceH4/zephyr-7b-beta ### Configuration The following YAML configuration was used to produce this model:
[ "# zephyr-beta-wizardLM-2-merge-7B\n\nThis is a merge of two pre-trained language models created using mergekit. The goal was to explore the impact of merging on reasoning and narrative generation. Given that both fine-tuned models are BF16 precision (despite the precision of the base model Mistral 7B v0.1 being FP16), this avoids issues with mixed precision during merging.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* lucyknada/microsoft_WizardLM-2-7B\n* HuggingFaceH4/zephyr-7b-beta", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-lucyknada/microsoft_WizardLM-2-7B #base_model-HuggingFaceH4/zephyr-7b-beta #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# zephyr-beta-wizardLM-2-merge-7B\n\nThis is a merge of two pre-trained language models created using mergekit. The goal was to explore the impact of merging on reasoning and narrative generation. Given that both fine-tuned models are BF16 precision (despite the precision of the base model Mistral 7B v0.1 being FP16), this avoids issues with mixed precision during merging.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* lucyknada/microsoft_WizardLM-2-7B\n* HuggingFaceH4/zephyr-7b-beta", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_4-seqsight_65536_512_47M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset. It achieves the following results on the evaluation set: - Loss: 1.0826 - F1 Score: 0.6434 - Accuracy: 0.647 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6429 | 20.0 | 200 | 0.6242 | 0.6495 | 0.65 | | 0.5598 | 40.0 | 400 | 0.5938 | 0.6990 | 0.699 | | 0.5 | 60.0 | 600 | 0.5678 | 0.7125 | 0.715 | | 0.4497 | 80.0 | 800 | 0.5621 | 0.7270 | 0.727 | | 0.4262 | 100.0 | 1000 | 0.5559 | 0.7432 | 0.744 | | 0.4118 | 120.0 | 1200 | 0.5539 | 0.7485 | 0.75 | | 0.3982 | 140.0 | 1400 | 0.5559 | 0.7379 | 0.738 | | 0.39 | 160.0 | 1600 | 0.5506 | 0.7376 | 0.738 | | 0.3813 | 180.0 | 1800 | 0.5543 | 0.7500 | 0.751 | | 0.3721 | 200.0 | 2000 | 0.5692 | 0.7418 | 0.742 | | 0.3627 | 220.0 | 2200 | 0.5774 | 0.7394 | 0.741 | | 0.3552 | 240.0 | 2400 | 0.5622 | 0.7492 | 0.75 | | 0.3475 | 260.0 | 2600 | 0.5459 | 0.7529 | 0.753 | | 0.3372 | 280.0 | 2800 | 0.5509 | 0.7562 | 0.757 | | 0.3274 | 300.0 | 3000 | 0.5506 | 0.7618 | 0.762 | | 0.3182 | 320.0 | 3200 | 0.5787 | 0.7554 | 0.758 | | 0.3076 | 340.0 | 3400 | 0.5501 | 0.7782 | 0.779 | | 0.2999 | 360.0 | 3600 | 0.5493 | 0.7640 | 0.766 | | 0.2889 | 380.0 | 3800 | 0.5461 | 0.7793 | 0.78 | | 0.2791 | 400.0 | 4000 | 0.5430 | 0.7828 | 0.783 | | 0.2711 | 420.0 | 4200 | 0.5613 | 0.7844 | 0.786 | | 0.2613 | 440.0 | 4400 | 0.5767 | 0.7811 | 0.783 | | 0.2525 | 460.0 | 4600 | 0.5546 | 0.7789 | 0.781 | | 0.2441 | 480.0 | 4800 | 0.5489 | 0.7917 | 0.793 | | 0.2355 | 500.0 | 5000 | 0.5749 | 0.7831 | 0.785 | | 0.2295 | 520.0 | 5200 | 0.5618 | 0.7925 | 0.794 | | 0.2219 | 540.0 | 5400 | 0.5502 | 0.8067 | 0.807 | | 0.2162 | 560.0 | 5600 | 0.5644 | 0.7957 | 0.797 | | 0.2106 | 580.0 | 5800 | 0.5789 | 0.8058 | 0.807 | | 0.2077 | 600.0 | 6000 | 0.5623 | 0.8074 | 0.808 | | 0.1995 | 620.0 | 6200 | 0.5720 | 0.8083 | 0.809 | | 0.1954 | 640.0 | 6400 | 0.5754 | 0.8072 | 0.808 | | 0.1907 | 660.0 | 6600 | 0.5907 | 0.8071 | 0.808 | | 0.1859 | 680.0 | 6800 | 0.5828 | 0.8091 | 0.81 | | 0.183 | 700.0 | 7000 | 0.5844 | 0.8153 | 0.816 | | 0.1777 | 720.0 | 7200 | 0.5739 | 0.8196 | 0.82 | | 0.1752 | 740.0 | 7400 | 0.6080 | 0.8060 | 0.807 | | 0.1738 | 760.0 | 7600 | 0.6083 | 0.8036 | 0.805 | | 0.1711 | 780.0 | 7800 | 0.6113 | 0.8121 | 0.813 | | 0.1684 | 800.0 | 8000 | 0.6043 | 0.8120 | 0.813 | | 0.1669 | 820.0 | 8200 | 0.6051 | 0.8112 | 0.812 | | 0.164 | 840.0 | 8400 | 0.6015 | 0.8133 | 0.814 | | 0.1612 | 860.0 | 8600 | 0.6188 | 0.8124 | 0.813 | | 0.1595 | 880.0 | 8800 | 0.6013 | 0.8123 | 0.813 | | 0.1576 | 900.0 | 9000 | 0.5933 | 0.8164 | 0.817 | | 0.1579 | 920.0 | 
9200 | 0.6078 | 0.8081 | 0.809 | | 0.1551 | 940.0 | 9400 | 0.6100 | 0.8132 | 0.814 | | 0.1543 | 960.0 | 9600 | 0.6119 | 0.8111 | 0.812 | | 0.1545 | 980.0 | 9800 | 0.6110 | 0.8112 | 0.812 | | 0.1536 | 1000.0 | 10000 | 0.6102 | 0.8122 | 0.813 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
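A hedged sketch of loading this adapter for inference. The head class (AutoModelForSequenceClassification), num_labels=2, trust_remote_code=True and the example DNA string are all assumptions; the card reports only F1/accuracy and does not show how the classifier or tokenizer were instantiated.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"                      # base model named in the card
adapter_id = "mahdibaghbanzadeh/GUE_tf_4-seqsight_65536_512_47M-L32_all"  # this record's repo id

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)  # restores the adapter (and any saved head modules)
model.eval()

inputs = tokenizer("ACGTAGCTAGCTACGATCGTAGCTAGCT", return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits.softmax(dim=-1))
```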
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_4-seqsight_65536_512_47M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_tf_4-seqsight_65536_512_47M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-04-17T05:10:21+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
GUE\_tf\_4-seqsight\_65536\_512\_47M-L32\_all ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset. It achieves the following results on the evaluation set: * Loss: 1.0826 * F1 Score: 0.6434 * Accuracy: 0.647 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/JoPmt/Hermetic-Llama-Ties <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hermetic-Llama-Ties-GGUF/resolve/main/Hermetic-Llama-Ties.Q2_K.gguf) | Q2_K | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/Hermetic-Llama-Ties-GGUF/resolve/main/Hermetic-Llama-Ties.IQ3_XS.gguf) | IQ3_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/Hermetic-Llama-Ties-GGUF/resolve/main/Hermetic-Llama-Ties.Q3_K_S.gguf) | Q3_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/Hermetic-Llama-Ties-GGUF/resolve/main/Hermetic-Llama-Ties.IQ3_S.gguf) | IQ3_S | 0.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hermetic-Llama-Ties-GGUF/resolve/main/Hermetic-Llama-Ties.IQ3_M.gguf) | IQ3_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/Hermetic-Llama-Ties-GGUF/resolve/main/Hermetic-Llama-Ties.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hermetic-Llama-Ties-GGUF/resolve/main/Hermetic-Llama-Ties.Q3_K_L.gguf) | Q3_K_L | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/Hermetic-Llama-Ties-GGUF/resolve/main/Hermetic-Llama-Ties.IQ4_XS.gguf) | IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/Hermetic-Llama-Ties-GGUF/resolve/main/Hermetic-Llama-Ties.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hermetic-Llama-Ties-GGUF/resolve/main/Hermetic-Llama-Ties.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hermetic-Llama-Ties-GGUF/resolve/main/Hermetic-Llama-Ties.Q5_K_S.gguf) | Q5_K_S | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/Hermetic-Llama-Ties-GGUF/resolve/main/Hermetic-Llama-Ties.Q5_K_M.gguf) | Q5_K_M | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/Hermetic-Llama-Ties-GGUF/resolve/main/Hermetic-Llama-Ties.Q6_K.gguf) | Q6_K | 0.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hermetic-Llama-Ties-GGUF/resolve/main/Hermetic-Llama-Ties.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "library_name": "transformers", "tags": ["merge", "mergekit", "lazymergekit", "BEE-spoke-data/smol_llama-220M-openhermes", "BEE-spoke-data/smol_llama-220M-GQA"], "base_model": "JoPmt/Hermetic-Llama-Ties", "quantized_by": "mradermacher"}
mradermacher/Hermetic-Llama-Ties-GGUF
null
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "BEE-spoke-data/smol_llama-220M-openhermes", "BEE-spoke-data/smol_llama-220M-GQA", "en", "base_model:JoPmt/Hermetic-Llama-Ties", "endpoints_compatible", "region:us" ]
null
2024-04-17T05:10:43+00:00
[]
[ "en" ]
TAGS #transformers #gguf #merge #mergekit #lazymergekit #BEE-spoke-data/smol_llama-220M-openhermes #BEE-spoke-data/smol_llama-220M-GQA #en #base_model-JoPmt/Hermetic-Llama-Ties #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #merge #mergekit #lazymergekit #BEE-spoke-data/smol_llama-220M-openhermes #BEE-spoke-data/smol_llama-220M-GQA #en #base_model-JoPmt/Hermetic-Llama-Ties #endpoints_compatible #region-us \n" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
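Because the "How to Get Started" section above is a placeholder, here is a hedged sketch of attaching the adapter to its declared base model (stabilityai/stablelm-3b-4e1t, from this record's base_model field), assuming an ordinary PEFT adapter such as LoRA.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "stabilityai/stablelm-3b-4e1t"
adapter_id = "AY2324S2-CS4248-Team-47/StableLM-DPO-Ultrafeedback"  # this record's repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # older transformers releases need this for StableLM checkpoints
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Direct preference optimization works by", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=60)[0], skip_special_tokens=True))
```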
{"library_name": "peft", "base_model": "stabilityai/stablelm-3b-4e1t"}
AY2324S2-CS4248-Team-47/StableLM-DPO-Ultrafeedback
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:stabilityai/stablelm-3b-4e1t", "region:us" ]
null
2024-04-17T05:11:13+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-stabilityai/stablelm-3b-4e1t #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-stabilityai/stablelm-3b-4e1t #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_3-seqsight_65536_512_47M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset. It achieves the following results on the evaluation set: - Loss: 0.6970 - F1 Score: 0.5785 - Accuracy: 0.58 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6775 | 14.29 | 200 | 0.6530 | 0.5861 | 0.599 | | 0.6521 | 28.57 | 400 | 0.6515 | 0.5891 | 0.593 | | 0.6332 | 42.86 | 600 | 0.6520 | 0.5893 | 0.601 | | 0.6109 | 57.14 | 800 | 0.6521 | 0.6138 | 0.618 | | 0.585 | 71.43 | 1000 | 0.6535 | 0.6204 | 0.621 | | 0.5689 | 85.71 | 1200 | 0.6656 | 0.6290 | 0.629 | | 0.5582 | 100.0 | 1400 | 0.6678 | 0.6236 | 0.624 | | 0.5492 | 114.29 | 1600 | 0.6860 | 0.6127 | 0.616 | | 0.5434 | 128.57 | 1800 | 0.6744 | 0.6112 | 0.612 | | 0.5358 | 142.86 | 2000 | 0.6770 | 0.6166 | 0.617 | | 0.528 | 157.14 | 2200 | 0.6876 | 0.6151 | 0.615 | | 0.5226 | 171.43 | 2400 | 0.7186 | 0.6139 | 0.614 | | 0.5156 | 185.71 | 2600 | 0.7043 | 0.6091 | 0.61 | | 0.5071 | 200.0 | 2800 | 0.7230 | 0.6170 | 0.62 | | 0.502 | 214.29 | 3000 | 0.7309 | 0.6030 | 0.603 | | 0.4934 | 228.57 | 3200 | 0.7531 | 0.6029 | 0.604 | | 0.4861 | 242.86 | 3400 | 0.7478 | 0.6089 | 0.609 | | 0.4796 | 257.14 | 3600 | 0.7654 | 0.6181 | 0.618 | | 0.4725 | 271.43 | 3800 | 0.7692 | 0.6159 | 0.616 | | 0.4676 | 285.71 | 4000 | 0.7616 | 0.6001 | 0.6 | | 0.4604 | 300.0 | 4200 | 0.7514 | 0.6021 | 0.602 | | 0.4534 | 314.29 | 4400 | 0.7611 | 0.6120 | 0.612 | | 0.4481 | 328.57 | 4600 | 0.7757 | 0.6117 | 0.612 | | 0.4428 | 342.86 | 4800 | 0.7963 | 0.6021 | 0.602 | | 0.4388 | 357.14 | 5000 | 0.8140 | 0.6110 | 0.611 | | 0.4297 | 371.43 | 5200 | 0.8055 | 0.6081 | 0.608 | | 0.4241 | 385.71 | 5400 | 0.8102 | 0.6159 | 0.616 | | 0.4198 | 400.0 | 5600 | 0.8355 | 0.6021 | 0.602 | | 0.4142 | 414.29 | 5800 | 0.8202 | 0.6120 | 0.612 | | 0.4114 | 428.57 | 6000 | 0.8378 | 0.6069 | 0.607 | | 0.4076 | 442.86 | 6200 | 0.8493 | 0.5916 | 0.593 | | 0.4017 | 457.14 | 6400 | 0.8281 | 0.6123 | 0.613 | | 0.3977 | 471.43 | 6600 | 0.8478 | 0.5999 | 0.6 | | 0.3934 | 485.71 | 6800 | 0.8371 | 0.6145 | 0.615 | | 0.3897 | 500.0 | 7000 | 0.8405 | 0.6051 | 0.605 | | 0.3863 | 514.29 | 7200 | 0.8297 | 0.6081 | 0.608 | | 0.3829 | 528.57 | 7400 | 0.8615 | 0.6051 | 0.605 | | 0.3795 | 542.86 | 7600 | 0.8482 | 0.6041 | 0.604 | | 0.3775 | 557.14 | 7800 | 0.8614 | 0.6101 | 0.61 | | 0.3733 | 571.43 | 8000 | 0.8678 | 0.6081 | 0.608 | | 0.3708 | 585.71 | 8200 | 0.8759 | 0.6101 | 0.61 | | 0.3697 | 600.0 | 8400 | 0.8474 | 0.6140 | 0.614 | | 0.3678 | 614.29 | 8600 | 0.8764 | 0.5986 | 0.599 | | 0.3661 | 628.57 | 8800 | 0.8847 | 0.6071 | 0.607 | | 0.363 | 642.86 | 9000 | 0.8804 
| 0.6151 | 0.615 | | 0.3619 | 657.14 | 9200 | 0.8750 | 0.6131 | 0.613 | | 0.3616 | 671.43 | 9400 | 0.8799 | 0.6101 | 0.61 | | 0.3588 | 685.71 | 9600 | 0.8777 | 0.6061 | 0.606 | | 0.3598 | 700.0 | 9800 | 0.8793 | 0.6010 | 0.601 | | 0.3598 | 714.29 | 10000 | 0.8761 | 0.6041 | 0.604 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_3-seqsight_65536_512_47M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_tf_3-seqsight_65536_512_47M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-04-17T05:12:05+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
GUE\_tf\_3-seqsight\_65536\_512\_47M-L32\_all ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset. It achieves the following results on the evaluation set: * Loss: 0.6970 * F1 Score: 0.5785 * Accuracy: 0.58 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chemical-ner-bert-large-uncased-5 This model is a fine-tuned version of [google-bert/bert-large-uncased](https://huggingface.co/google-bert/bert-large-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0906 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1366 | 1.0 | 852 | 0.0949 | | 0.0815 | 2.0 | 1704 | 0.0850 | | 0.065 | 3.0 | 2556 | 0.0906 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
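A usage sketch for the NER fine-tune described above: run it through the token-classification pipeline. The repo id is this record's id; the entity label set comes from an unpublished dataset, so the example sentence and its tags are only illustrative.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="shubhamgantayat/chemical-ner-bert-large-uncased-5",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entity spans
)

print(ner("Aspirin (acetylsalicylic acid) irreversibly inhibits cyclooxygenase-1."))
```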
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google-bert/bert-large-uncased", "model-index": [{"name": "chemical-ner-bert-large-uncased-5", "results": []}]}
shubhamgantayat/chemical-ner-bert-large-uncased-5
null
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-large-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T05:13:43+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #base_model-google-bert/bert-large-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
chemical-ner-bert-large-uncased-5 ================================= This model is a fine-tuned version of google-bert/bert-large-uncased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.0906 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #base_model-google-bert/bert-large-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tuning-code_llama_lib_4 This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1215 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - gradient_accumulation_steps: 5 - total_train_batch_size: 15 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 60 - num_epochs: 11 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7737 | 1.0 | 63 | 0.6034 | | 0.4194 | 2.0 | 126 | 0.3423 | | 0.2123 | 3.0 | 189 | 0.2168 | | 0.1542 | 4.0 | 252 | 0.1721 | | 0.1343 | 5.0 | 315 | 0.1508 | | 0.1224 | 6.0 | 378 | 0.1395 | | 0.1141 | 7.0 | 441 | 0.1321 | | 0.1075 | 8.0 | 504 | 0.1273 | | 0.1028 | 9.0 | 567 | 0.1243 | | 0.0993 | 10.0 | 630 | 0.1220 | | 0.0973 | 11.0 | 693 | 0.1215 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.15.0 - Tokenizers 0.15.2
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "codellama/CodeLlama-7b-hf", "model-index": [{"name": "fine-tuning-code_llama_lib_4", "results": []}]}
Surabhi-K/fine-tuning-code_llama_lib_4
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:codellama/CodeLlama-7b-hf", "license:llama2", "region:us" ]
null
2024-04-17T05:15:53+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-codellama/CodeLlama-7b-hf #license-llama2 #region-us
fine-tuning-code\_llama\_lib\_4 =============================== This model is a fine-tuned version of codellama/CodeLlama-7b-hf on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1215 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 3 * eval\_batch\_size: 3 * seed: 42 * gradient\_accumulation\_steps: 5 * total\_train\_batch\_size: 15 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 60 * num\_epochs: 11 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.7.1 * Transformers 4.36.2 * Pytorch 2.1.2 * Datasets 2.15.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* gradient\\_accumulation\\_steps: 5\n* total\\_train\\_batch\\_size: 15\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 11\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2\n* Datasets 2.15.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-codellama/CodeLlama-7b-hf #license-llama2 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* gradient\\_accumulation\\_steps: 5\n* total\\_train\\_batch\\_size: 15\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 11\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2\n* Datasets 2.15.0\n* Tokenizers 0.15.2" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Hi - Sanchit Gandhi This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4440 - Wer: 32.5447 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0921 | 2.44 | 1000 | 0.2986 | 35.1012 | | 0.0209 | 4.89 | 2000 | 0.3521 | 33.4928 | | 0.0013 | 7.33 | 3000 | 0.4198 | 32.6547 | | 0.0005 | 9.78 | 4000 | 0.4440 | 32.5447 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
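A minimal transcription sketch follows; the checkpoint id is taken from this card's metadata, while the audio file name is only a placeholder.

```python
from transformers import pipeline

# ASR pipeline for the fine-tuned Hindi Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="rmacek/whisper-small-hi")

# Force Hindi transcription; chunking handles clips longer than 30 seconds.
result = asr(
    "hindi_sample.wav",
    chunk_length_s=30,
    generate_kwargs={"language": "hindi", "task": "transcribe"},
)
print(result["text"])
```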
{"language": ["hi"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small Hi - Sanchit Gandhi", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 11.0", "type": "mozilla-foundation/common_voice_11_0", "config": "hi", "split": "None", "args": "config: hi, split: test"}, "metrics": [{"type": "wer", "value": 32.54465419453145, "name": "Wer"}]}]}]}
rmacek/whisper-small-hi
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-17T05:17:10+00:00
[]
[ "hi" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #hi #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us
Whisper Small Hi - Sanchit Gandhi ================================= This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: * Loss: 0.4440 * Wer: 32.5447 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 4000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #hi #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_shp3_dpo9 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.4927 - Rewards/chosen: -8.7213 - Rewards/rejected: -10.8984 - Rewards/accuracies: 0.6200 - Rewards/margins: 2.1771 - Logps/rejected: -266.2351 - Logps/chosen: -246.7773 - Logits/rejected: -0.7193 - Logits/chosen: -0.7036 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.0555 | 2.67 | 100 | 2.5620 | -16.4770 | -17.3996 | 0.5300 | 0.9226 | -273.4587 | -255.3948 | -0.7097 | -0.7042 | | 0.0052 | 5.33 | 200 | 3.5005 | -15.7277 | -18.5215 | 0.6300 | 2.7938 | -274.7052 | -254.5623 | -0.7073 | -0.6907 | | 0.0436 | 8.0 | 300 | 2.7714 | -10.1931 | -12.9259 | 0.6100 | 2.7328 | -268.4880 | -248.4127 | -0.9618 | -0.9474 | | 0.0 | 10.67 | 400 | 3.6414 | -11.7426 | -14.1647 | 0.6000 | 2.4221 | -269.8643 | -250.1343 | -0.7324 | -0.7187 | | 0.0 | 13.33 | 500 | 3.4708 | -8.6874 | -10.8958 | 0.6300 | 2.2084 | -266.2322 | -246.7397 | -0.7190 | -0.7028 | | 0.0 | 16.0 | 600 | 3.5011 | -8.7071 | -10.8590 | 0.6300 | 2.1519 | -266.1913 | -246.7616 | -0.7194 | -0.7036 | | 0.0 | 18.67 | 700 | 3.4789 | -8.7021 | -10.8992 | 0.6400 | 2.1971 | -266.2360 | -246.7560 | -0.7192 | -0.7036 | | 0.0 | 21.33 | 800 | 3.5146 | -8.6829 | -10.8473 | 0.6300 | 2.1644 | -266.1783 | -246.7346 | -0.7194 | -0.7034 | | 0.0 | 24.0 | 900 | 3.4567 | -8.6690 | -10.9074 | 0.6300 | 2.2384 | -266.2451 | -246.7193 | -0.7194 | -0.7036 | | 0.0 | 26.67 | 1000 | 3.4927 | -8.7213 | -10.8984 | 0.6200 | 2.1771 | -266.2351 | -246.7773 | -0.7193 | -0.7036 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
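Because only the PEFT adapter is published here, inference requires attaching it to the gated base model. The sketch below assumes the adapter repo id from this card's metadata, access to meta-llama/Llama-2-7b-chat-hf, and the standard Llama-2 chat prompt format.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"   # gated; requires an approved Hugging Face token
adapter_id = "guoyu-zhang/model_shp3_dpo9"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the DPO-trained adapter

prompt = "[INST] Give one tip for writing clear documentation. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```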
{"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_shp3_dpo9", "results": []}]}
guoyu-zhang/model_shp3_dpo9
null
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-04-17T05:18:27+00:00
[]
[]
TAGS #peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
model\_shp3\_dpo9 ================= This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 3.4927 * Rewards/chosen: -8.7213 * Rewards/rejected: -10.8984 * Rewards/accuracies: 0.6200 * Rewards/margins: 2.1771 * Logps/rejected: -266.2351 * Logps/chosen: -246.7773 * Logits/rejected: -0.7193 * Logits/chosen: -0.7036 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 4 * eval\_batch\_size: 1 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 100 * training\_steps: 1000 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.1 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
null
# AffineQuant Model Zoo

AffineQuant is a novel quantization method that uses an affine transformation matrix to reshape the distributions of weights and activations, with the aim of reducing quantization errors. By introducing an affine transformation matrix, AffineQuant can better align the data distribution with the quantization function, thereby reducing quantization errors. The matrix optimization objective is to minimize the mean squared error between the pre- and post-quantization feature maps, while the Gradual Mask (GM) method maintains the strict diagonal dominance of the affine matrix, ensuring the matrix's invertibility and stable convergence. Experimental results show that AffineQuant performs better than existing quantization methods, such as OmniQuant and SmoothQuant, achieving consistent performance improvements across different quantization configurations and datasets.

Code: [https://github.com/bytedance/AffineQuant](https://github.com/bytedance/AffineQuant)

Paper: [https://arxiv.org/abs/2403.12544](https://arxiv.org/abs/2403.12544)

## How to use

This repository contains models with various quantization configurations. The covered model families are OPT and LLaMA 1 & 2.

### Fake Quantization Accuracy

To reproduce the accuracy reported in the paper, use the ```--model``` parameter to load the fake-quantized model and set the bit parameters to 16 so that the quantization step is skipped. For example:

```
CUDA_VISIBLE_DEVICES=0 python main.py \
--model /path/to/llama-13b-w2a16g128 --eval_ppl \
--output_dir ./log/llama-13b-w2a16g128 \
--wbits 16 --abits 16
```

Note that if your quantized model was trained with the ```--let``` parameter, you need to enable the bias in the layernorm layers and in specific linear layers of the transformers repository so that the shift parameters can be loaded. For instance, for the LLaMA model, we make the following modifications in ```modeling_llama.py```:

1. Set the bias of the q, k, v, o, up, and gate linear layers to True.

```
self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=True)
self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=True)
self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=True)
self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=True)
self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=True)
self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=True)
```

2. Enable the bias in RMSNorm. We directly replace the original RMSNorm with ```AffineLlamaRMSNorm``` from AffineQuant.

## Inference Overhead

For the results described in the paper, the weight-only quantization configuration imposes no restrictions on the affine matrices after layernorm. For the weight-activation configuration, such as 4/4 bits, we only update the diagonal elements of the affine matrices after layernorm. Therefore, inference with the merged parameters incurs no additional overhead.

## Benchmarks

We evaluate the quantization performance of LLaMA-7B, 13B, and 30B on six zero-shot datasets using 4/4 bit quantization in the following table.
| | PIQA($\uparrow$) | ARC-e($\uparrow$) | WinoGrande($\uparrow$) | BoolQ($\uparrow$) | ARC-c($\uparrow$) | HellaSwag($\uparrow$) | Avg.($\uparrow$) | | ---------------------- | ---------------- | ----------------- | ---------------------- | ----------------- | ----------------- | --------------------- | ---------------- | | LLaMA-7B, OmniQuant | 66.15 | 45.20 | 53.43 | 63.51 | 31.14 | 56.44 | 52.65 | | LLaMA-7B, AffineQuant | 69.37 | 42.55 | 55.33 | 63.73 | 31.91 | 57.65 | 53.42 | | LLaMA-13B, OmniQuant | 69.69 | 47.39 | 55.80 | 62.84 | 33.10 | 58.96 | 54.37 | | LLaMA-13B, AffineQuant | 66.32 | 43.90 | 54.70 | 64.10 | 29.61 | 56.88 | 52.58 | | LLaMA-30B, OmniQuant | 71.21 | 49.45 | 59.19 | 65.33 | 34.47 | 64.65 | 56.63 | | LLaMA-30B, AffineQuant | 70.84 | 49.41 | 58.64 | 70.12 | 37.12 | 65.53 | 58.61 | Meanwhile, we compare the 4/4 bit quantization performance of LLaMA1&2 models on WikiText2 and C4 datasets in the following table. | | Methods | WikiText2 | C4 | | ---------- | ----------- | --------- | ----- | | LLaMA-7B | OmniQuant | 11.26 | 14.51 | | | AffineQuant | 10.28 | 13.64 | | LLaMA-13B | OmniQuant | 10.87 | 13.78 | | | AffineQuant | 10.32 | 13.44 | | LLaMA-30B | OmniQuant | 10.33 | 12.49 | | | AffineQuant | 9.35 | 11.58 | | LLaMA2-7B | OmniQuant | 14.26 | 18.02 | | | AffineQuant | 12.69 | 15.76 | | LLaMA2-13B | OmniQuant | 12.30 | 14.55 | | | AffineQuant | 11.45 | 13.97 | ## Related Project [SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models](https://github.com/mit-han-lab/smoothquant) [AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration](https://github.com/mit-han-lab/llm-awq) [GPTQ: Accurate Post-training Compression for Generative Pretrained Transformers](https://github.com/IST-DASLab/gptq) [RPTQ: Reorder-Based Post-Training Quantization for Large Language Models](https://github.com/hahnyuan/RPTQ4LLM) [OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models](https://github.com/OpenGVLab/OmniQuant) [MLC LLM](https://github.com/mlc-ai/mlc-llm) [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) ## Citation ```latex @inproceedings{ ma2024affinequant, title={AffineQuant: Affine Transformation Quantization for Large Language Models}, author={Yuexiao Ma and Huixia Li and Xiawu Zheng and Feng Ling and Xuefeng Xiao and Rui Wang and Shilei Wen and Fei Chao and Rongrong Ji}, booktitle={The Twelfth International Conference on Learning Representations}, year={2024}, url={https://openreview.net/forum?id=of2rhALq8l} } ```
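To make the "no additional overhead" claim concrete, the snippet below is our own numerical illustration (not code from the AffineQuant repository) of the underlying equivalence: an invertible affine matrix can be folded into the weights while its inverse is folded into the incoming activations, leaving the layer's output unchanged.

```python
import torch

torch.manual_seed(0)
d = 8
X = torch.randn(4, d)                          # activations entering a linear layer
W = torch.randn(d, d)                          # the layer's weight
A = torch.eye(d) + 0.05 * torch.randn(d, d)    # affine matrix, kept near the identity so it stays invertible

W_merged = A @ W                               # transformed weight (this is what gets quantized)
X_merged = X @ torch.linalg.inv(A)             # inverse transform absorbed by the preceding activations

print(torch.allclose(X @ W, X_merged @ W_merged, atol=1e-4))  # True: the layer's function is preserved
```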
{"language": ["en"], "license": "apache-2.0"}
ByteDance/AffineQuant
null
[ "safetensors", "en", "arxiv:2403.12544", "license:apache-2.0", "region:us" ]
null
2024-04-17T05:19:00+00:00
[ "2403.12544" ]
[ "en" ]
TAGS #safetensors #en #arxiv-2403.12544 #license-apache-2.0 #region-us
AffineQuant Model Zoo ===================== AffineQuant is a novel quantization method that uses an affine transformation matrix to change the distribution of weights and activations, aimed at optimizing the distribution of weight activations and reducing quantization errors. By introducing an affine transformation matrix, AffineQuant can better align the data distribution with the quantization function, thereby reducing quantization errors. The matrix optimization objective is to minimize the mean squared error between pre- and post-quantization feature map, while introducing the Gradual Mask (GM) method to maintain the strictly diagonal dominance of the affine matrix, ensuring the matrix's invertibility and stable convergence. Experimental results show that AffineQuant performs better than existing quantization methods, such as OmniQuant and SmoothQuant, achieving consistent performance improvements across different quantization configurations and datasets. Code: URL Paper: URL How to use ---------- This repository contains models with various quantization configurations. The types of models include: OPT, LLaMA1&2. ### Fake Quantization Accuracy To reproduce the accuracy reported in the paper, we need to use the parameter to load the fake-quantized model. At the same time, we need to specify the bit parameter as 16 to skip the quantization step. For example: It is worth noting that if your quantization model is trained using the parameter, you need to enable the bias in the layernorm layers and specific linear layers within the transformer repository to load the shift parameters. For instance, for the llama model, we make the following modifications in : 1. Set the bias of the q,k,v,o,up,gate linear layer to True. 2. Enable the bias in RMSNorm. We directly replace the original RMSNorm with from AffineQuant. Inference Overhead ------------------ To reproduce the accuracy described in the paper, our weight-only quantization configuration imposes no restrictions on the affine matrices after layernorm. For the weight-activation configuration, such as 4/4 bits, we only update the diagonal elements of the affine matrices after layernorm. Therefore, the model inference with merged parameters incurs no additional overhead. Benchmarks ---------- We evaluate the quantization performance of LLaMA-7B, 13B, 30B on six zero-shot datasets using 4/4 bit quantization in the following table. Meanwhile, we compare the 4/4 bit quantization performance of LLaMA1&2 models on WikiText2 and C4 datasets in the following table. Related Project --------------- SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration GPTQ: Accurate Post-training Compression for Generative Pretrained Transformers RPTQ: Reorder-Based Post-Training Quantization for Large Language Models OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models MLC LLM AutoGPTQ
[ "### Fake Quantization Accuracy\n\n\nTo reproduce the accuracy reported in the paper, we need to use the parameter to load the fake-quantized model. At the same time, we need to specify the bit parameter as 16 to skip the quantization step. For example:\n\n\nIt is worth noting that if your quantization model is trained using the parameter, you need to enable the bias in the layernorm layers and specific linear layers within the transformer repository to load the shift parameters. For instance, for the llama model, we make the following modifications in :\n\n\n1. Set the bias of the q,k,v,o,up,gate linear layer to True.\n2. Enable the bias in RMSNorm. We directly replace the original RMSNorm with from AffineQuant.\n\n\nInference Overhead\n------------------\n\n\nTo reproduce the accuracy described in the paper, our weight-only quantization configuration imposes no restrictions on the affine matrices after layernorm. For the weight-activation configuration, such as 4/4 bits, we only update the diagonal elements of the affine matrices after layernorm. Therefore, the model inference with merged parameters incurs no additional overhead.\n\n\nBenchmarks\n----------\n\n\nWe evaluate the quantization performance of LLaMA-7B, 13B, 30B on six zero-shot datasets using 4/4 bit quantization in the following table.\n\n\n\nMeanwhile, we compare the 4/4 bit quantization performance of LLaMA1&2 models on WikiText2 and C4 datasets in the following table.\n\n\n\nRelated Project\n---------------\n\n\nSmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models\n\n\nAWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration\n\n\nGPTQ: Accurate Post-training Compression for Generative Pretrained Transformers\n\n\nRPTQ: Reorder-Based Post-Training Quantization for Large Language Models\n\n\nOmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models\n\n\nMLC LLM\n\n\nAutoGPTQ" ]
[ "TAGS\n#safetensors #en #arxiv-2403.12544 #license-apache-2.0 #region-us \n", "### Fake Quantization Accuracy\n\n\nTo reproduce the accuracy reported in the paper, we need to use the parameter to load the fake-quantized model. At the same time, we need to specify the bit parameter as 16 to skip the quantization step. For example:\n\n\nIt is worth noting that if your quantization model is trained using the parameter, you need to enable the bias in the layernorm layers and specific linear layers within the transformer repository to load the shift parameters. For instance, for the llama model, we make the following modifications in :\n\n\n1. Set the bias of the q,k,v,o,up,gate linear layer to True.\n2. Enable the bias in RMSNorm. We directly replace the original RMSNorm with from AffineQuant.\n\n\nInference Overhead\n------------------\n\n\nTo reproduce the accuracy described in the paper, our weight-only quantization configuration imposes no restrictions on the affine matrices after layernorm. For the weight-activation configuration, such as 4/4 bits, we only update the diagonal elements of the affine matrices after layernorm. Therefore, the model inference with merged parameters incurs no additional overhead.\n\n\nBenchmarks\n----------\n\n\nWe evaluate the quantization performance of LLaMA-7B, 13B, 30B on six zero-shot datasets using 4/4 bit quantization in the following table.\n\n\n\nMeanwhile, we compare the 4/4 bit quantization performance of LLaMA1&2 models on WikiText2 and C4 datasets in the following table.\n\n\n\nRelated Project\n---------------\n\n\nSmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models\n\n\nAWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration\n\n\nGPTQ: Accurate Post-training Compression for Generative Pretrained Transformers\n\n\nRPTQ: Reorder-Based Post-Training Quantization for Large Language Models\n\n\nOmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models\n\n\nMLC LLM\n\n\nAutoGPTQ" ]
text-generation
transformers
# ArabianGPT Model Overview

## Disclaimer for the Use of Large Language Models (LLMs) for Text Generation

<p style="color: red;">We disclaim all responsibility for any harm, inaccuracies, or inappropriate content generated by ArabianGPT-0.8B, and users engage with and apply the model's outputs at their own risk.</p>

> **Important Note:** Currently, we offer a raw pre-trained model. Our team is actively working on releasing instruction-based LLMs that are fine-tuned and augmented with RLHF. The first set of pre-trained models has been made available for community exploration. While we do have models fine-tuned for specific tasks such as summarization and sentiment analysis, they are still in the development phase.

## How can you use this pre-trained model?
You are invited to utilize this pre-trained, native Arabic language model as an experimental tool to assess its capabilities, aid in its fine-tuning, and evaluate its performance across a variety of downstream tasks. We encourage you to review our technical report for a comprehensive understanding of the model's performance metrics and the specific downstream tasks it has been tested on. This will provide valuable insights into its applicability and effectiveness in diverse applications.

## Introduction
ArabianGPT-0.8B, part of the ArabianLLM initiatives, is a specialized GPT model optimized for the Arabic language. Developed at Prince Sultan University's Robotics and Internet of Things Lab, this model is a leap forward in natural language modeling and generation for Arabic, tackling the language's unique challenges.

## Key Features
- **Architecture**: GPT-2
- **Model Size**: 0.8 billion parameters
- **Layers**: 36
- **Model Attention Layers (MAL)**: 20
- **Context Window Size**: 1024 tokens

## Training
- **Dataset**: Scraped texts containing scientific articles and general texts
- **Data Size**: 117 GB
- **Tokenizer**: Aranizer 64K
- **Tokens**: Over 14 billion
- **Hardware**: 5 NVIDIA A100 GPUs
- **Performance**: loss of 3.6

## Role in ArabianLLM Initiatives
ArabianGPT-0.8B is crucial for advancing Arabic language processing, addressing challenges unique to Arabic morphology and dialects.

## Usage
Suitable for Arabic text generation tasks. Example usage with Transformers Pipeline:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="riotu-lab/ArabianGPT-08B", max_new_tokens=1024)

text = 'أعلنت وزارة الحج في المملكة العربية السعودية'  # example prompt taken from the card's widget
pipe(text)
```

## Limitations and Ethical Considerations
- The model may have context understanding or text generation limitations in certain scenarios.
- Emphasis on ethical use to prevent misinformation or harmful content propagation.

## Acknowledgments
Special thanks to Prince Sultan University, particularly the Robotics and Internet of Things Lab.

## Contact Information
For inquiries: [[email protected]](mailto:[email protected]).

## Disclaimer for the Use of Large Language Models (LLMs) for Text Generation

<p style="color: red;">We disclaim all responsibility for any harm, inaccuracies, or inappropriate content generated by ArabianGPT-0.8B, and users engage with and apply the model's outputs at their own risk.</p>
{"language": ["ar"], "license": "apache-2.0", "tags": ["ArabianGPT"], "widget": [{"text": "\u0623\u0639\u0644\u0646\u062a \u0648\u0632\u0627\u0631\u0629 \u0627\u0644\u062d\u062c \u0641\u064a \u0627\u0644\u0645\u0645\u0644\u0643\u0629 \u0627\u0644\u0639\u0631\u0628\u064a\u0629 \u0627\u0644\u0633\u0639\u0648\u062f\u064a\u0629", "example_title": "\u0645\u062b\u0627\u0644 \u0661"}, {"text": "\u064a\u0628\u062f\u0648 \u0627\u0644\u064a\u0648\u0645 \u062c\u0645\u064a\u0644\u0627\u060c \u0633\u0623\u0642\u0648\u0645 \u0628\u062a\u062d\u0636\u064a\u0631", "example_title": "\u0645\u062b\u0627\u0644 \u0662"}, {"text": "\u0625\u0646 \u0627\u0644\u062a\u0642\u0646\u064a\u0627\u062a \u0627\u0644\u062d\u062f\u064a\u062b\u0629", "example_title": "\u0645\u062b\u0627\u0644 \u0663"}]}
riotu-lab/ArabianGPT-08B-V2
null
[ "transformers", "safetensors", "gpt2", "text-generation", "ArabianGPT", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2024-04-17T05:20:31+00:00
[]
[ "ar" ]
TAGS #transformers #safetensors #gpt2 #text-generation #ArabianGPT #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# ArabianGPT Model Overview ## Disclaimer for the Use of Large Language Models (LLMs) for Text Generation <p style="color: red;">We disclaim all responsibility for any harm, inaccuracies, or inappropriate content generated by ArabianGPT-0.8B, and users engage with and apply the model's outputs at their own risk.</p> > Important Note: Currently, we offer a raw pre-trained model. Our team is actively working on releasing instruction-based LLMs that are fine-tuned and augmented with LRHF. The first set of pre-trained models has been made available for community exploration. While we do have models fine-tuned for specific tasks such as summarization and sentiment analysis, they are still in the development phase. ## How you can use this Pre-Trained? You are invited to utilize this pre-trained, native Arabic language model as an experimental tool to assess its capabilities, aid in its fine-tuning, and evaluate its performance across a variety of downstream tasks. We encourage you to review our technical report for a comprehensive understanding of the model's performance metrics and the specific downstream tasks it has been tested on. This will provide valuable insights into its applicability and effectiveness in diverse applications. ## Introduction ArabianGPT-0.8B, part of the ArabianLLM initiatives, is a specialized GPT model optimized for the Arabic language. Developed at Prince Sultan University's Robotics and Internet of Things Lab, this model is a leap forward in natural language modeling and generation for Arabic, tackling the language's unique challenges. ## Key Features - Architecture: GPT-2 - Model Size: 0.8 billion parameters - Layers: 36 - Model Attention Layers (MAL): 20 - Context Window Size: 1024 tokens ## Training - Dataset: Scraped texts contains scientific articles, and general texts - Data Size: 117 GB - Tokenizer: Aranizer 64K - Tokens: Over 14 billion - Hardware: 5 NDIVIA A100 GPUs - Performance: loss of 3.6 ## Role in ArabianLLM Initiatives ArabianGPT-0.8B is crucial for advancing Arabic language processing, addressing challenges unique to Arabic morphology and dialects. ## Usage Suitable for Arabic text generation tasks. Example usage with Transformers Pipeline: ## Limitations and Ethical Considerations - The model may have context understanding or text generation limitations in certain scenarios. - Emphasis on ethical use to prevent misinformation or harmful content propagation. ## Acknowledgments Special thanks to Prince Sultan University, particularly the Robotics and Internet of Things Lab. ## Contact Information For inquiries: riotu@URL. ## Disclaimer for the Use of Large Language Models (LLMs) for Text Generation <p style="color: red;">We disclaim all responsibility for any harm, inaccuracies, or inappropriate content generated by ArabianGPT-0.3B, and users engage with and apply the model's outputs at their own risk.</p>
[ "# ArabianGPT Model Overview", "## Disclaimer for the Use of Large Language Models (LLMs) for Text Generation\n\n<p style=\"color: red;\">We disclaim all responsibility for any harm, inaccuracies, or inappropriate content generated by ArabianGPT-0.8B, and users engage with and apply the model's outputs at their own risk.</p>\n\n> Important Note: Currently, we offer a raw pre-trained model. Our team is actively working on releasing instruction-based LLMs that are fine-tuned and augmented with LRHF. The first set of pre-trained models has been made available for community exploration. While we do have models fine-tuned for specific tasks such as summarization and sentiment analysis, they are still in the development phase.", "## How you can use this Pre-Trained?\nYou are invited to utilize this pre-trained, native Arabic language model as an experimental tool to assess its capabilities, aid in its fine-tuning, and evaluate its performance across a variety of downstream tasks. We encourage you to review our technical report for a comprehensive understanding of the model's performance metrics and the specific downstream tasks it has been tested on. This will provide valuable insights into its applicability and effectiveness in diverse applications.", "## Introduction\nArabianGPT-0.8B, part of the ArabianLLM initiatives, is a specialized GPT model optimized for the Arabic language. Developed at Prince Sultan University's Robotics and Internet of Things Lab, this model is a leap forward in natural language modeling and generation for Arabic, tackling the language's unique challenges.", "## Key Features\n- Architecture: GPT-2\n- Model Size: 0.8 billion parameters\n- Layers: 36\n- Model Attention Layers (MAL): 20\n- Context Window Size: 1024 tokens", "## Training\n- Dataset: Scraped texts contains scientific articles, and general texts\n- Data Size: 117 GB\n- Tokenizer: Aranizer 64K\n- Tokens: Over 14 billion\n- Hardware: 5 NDIVIA A100 GPUs \n- Performance: loss of 3.6", "## Role in ArabianLLM Initiatives\nArabianGPT-0.8B is crucial for advancing Arabic language processing, addressing challenges unique to Arabic morphology and dialects.", "## Usage\nSuitable for Arabic text generation tasks. Example usage with Transformers Pipeline:", "## Limitations and Ethical Considerations\n\n- The model may have context understanding or text generation limitations in certain scenarios.\n- Emphasis on ethical use to prevent misinformation or harmful content propagation.", "## Acknowledgments\n\nSpecial thanks to Prince Sultan University, particularly the Robotics and Internet of Things Lab.", "## Contact Information\n\nFor inquiries: riotu@URL.", "## Disclaimer for the Use of Large Language Models (LLMs) for Text Generation\n\n<p style=\"color: red;\">We disclaim all responsibility for any harm, inaccuracies, or inappropriate content generated by ArabianGPT-0.3B, and users engage with and apply the model's outputs at their own risk.</p>" ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #ArabianGPT #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# ArabianGPT Model Overview", "## Disclaimer for the Use of Large Language Models (LLMs) for Text Generation\n\n<p style=\"color: red;\">We disclaim all responsibility for any harm, inaccuracies, or inappropriate content generated by ArabianGPT-0.8B, and users engage with and apply the model's outputs at their own risk.</p>\n\n> Important Note: Currently, we offer a raw pre-trained model. Our team is actively working on releasing instruction-based LLMs that are fine-tuned and augmented with LRHF. The first set of pre-trained models has been made available for community exploration. While we do have models fine-tuned for specific tasks such as summarization and sentiment analysis, they are still in the development phase.", "## How you can use this Pre-Trained?\nYou are invited to utilize this pre-trained, native Arabic language model as an experimental tool to assess its capabilities, aid in its fine-tuning, and evaluate its performance across a variety of downstream tasks. We encourage you to review our technical report for a comprehensive understanding of the model's performance metrics and the specific downstream tasks it has been tested on. This will provide valuable insights into its applicability and effectiveness in diverse applications.", "## Introduction\nArabianGPT-0.8B, part of the ArabianLLM initiatives, is a specialized GPT model optimized for the Arabic language. Developed at Prince Sultan University's Robotics and Internet of Things Lab, this model is a leap forward in natural language modeling and generation for Arabic, tackling the language's unique challenges.", "## Key Features\n- Architecture: GPT-2\n- Model Size: 0.8 billion parameters\n- Layers: 36\n- Model Attention Layers (MAL): 20\n- Context Window Size: 1024 tokens", "## Training\n- Dataset: Scraped texts contains scientific articles, and general texts\n- Data Size: 117 GB\n- Tokenizer: Aranizer 64K\n- Tokens: Over 14 billion\n- Hardware: 5 NDIVIA A100 GPUs \n- Performance: loss of 3.6", "## Role in ArabianLLM Initiatives\nArabianGPT-0.8B is crucial for advancing Arabic language processing, addressing challenges unique to Arabic morphology and dialects.", "## Usage\nSuitable for Arabic text generation tasks. Example usage with Transformers Pipeline:", "## Limitations and Ethical Considerations\n\n- The model may have context understanding or text generation limitations in certain scenarios.\n- Emphasis on ethical use to prevent misinformation or harmful content propagation.", "## Acknowledgments\n\nSpecial thanks to Prince Sultan University, particularly the Robotics and Internet of Things Lab.", "## Contact Information\n\nFor inquiries: riotu@URL.", "## Disclaimer for the Use of Large Language Models (LLMs) for Text Generation\n\n<p style=\"color: red;\">We disclaim all responsibility for any harm, inaccuracies, or inappropriate content generated by ArabianGPT-0.3B, and users engage with and apply the model's outputs at their own risk.</p>" ]
null
null
# ID-Pose: Sparse-view Camera Pose Estimation by Inverting Diffusion Models - ArXiv: https://arxiv.org/abs/2306.17140 - Code: https://github.com/xt4d/id-pose/ - Demo: https://huggingface.co/spaces/tokenid/ID-Pose/ ## Abstract - ID-Pose estimates camera poses of sparse-view images of a 3D object (appearance overlaps not required). - ID-Pose inversely uses the off-the-shelf Zero123 to estimate camera poses by iteratively minimizing denoising errors given input images. - ID-Pose is a zero-shot method that requires NO additional model training or finetuning. - ID-Pose exhibits strong generalization ability on open-world images as the method effectively leverages the image priors from Zero123 (StableDiffusion).
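For readers who want intuition for what "inverting" the diffusion model means here, the function below is a rough conceptual sketch of pose estimation by minimizing denoising error. It is not ID-Pose's actual code: the pose parameterization, the `denoiser` callable, and the noise schedule are all placeholders assumed for illustration.

```python
import torch

def estimate_relative_pose(denoiser, ref_image, target_image, steps=200, lr=1e-2):
    """Conceptual sketch: treat the relative camera pose as free parameters and minimize the
    denoising error that a view-conditioned diffusion model (e.g. Zero123) assigns to the
    target image under that pose hypothesis. `denoiser(noisy, t, ref_image, pose)` is an
    assumed interface, not a real API."""
    betas = torch.linspace(1e-4, 0.02, 1000)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)      # standard DDPM noise schedule
    pose = torch.zeros(3, requires_grad=True)          # e.g. (elevation, azimuth, radius) — an assumption
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        t = torch.randint(0, 1000, (1,))
        noise = torch.randn_like(target_image)
        noisy = alpha_bar[t].sqrt() * target_image + (1 - alpha_bar[t]).sqrt() * noise
        loss = torch.nn.functional.mse_loss(denoiser(noisy, t, ref_image, pose), noise)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pose.detach()
```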
{"license": "mit"}
tokenid/ID-Pose
null
[ "arxiv:2306.17140", "license:mit", "has_space", "region:us" ]
null
2024-04-17T05:21:01+00:00
[ "2306.17140" ]
[]
TAGS #arxiv-2306.17140 #license-mit #has_space #region-us
# ID-Pose: Sparse-view Camera Pose Estimation by Inverting Diffusion Models - ArXiv: URL - Code: URL - Demo: URL ## Abstract - ID-Pose estimates camera poses of sparse-view images of a 3D object (appearance overlaps not required). - ID-Pose inversely uses the off-the-shelf Zero123 to estimate camera poses by iteratively minimizing denoising errors given input images. - ID-Pose is a zero-shot method that requires NO additional model training or finetuning. - ID-Pose exhibits strong generalization ability on open-world images as the method effectively leverages the image priors from Zero123 (StableDiffusion).
[ "# ID-Pose: Sparse-view Camera Pose Estimation by Inverting Diffusion Models\n\n- ArXiv: URL\n- Code: URL\n- Demo: URL", "## Abstract\n- ID-Pose estimates camera poses of sparse-view images of a 3D object (appearance overlaps not required).\n- ID-Pose inversely uses the off-the-shelf Zero123 to estimate camera poses by iteratively minimizing denoising errors given input images.\n- ID-Pose is a zero-shot method that requires NO additional model training or finetuning.\n- ID-Pose exhibits strong generalization ability on open-world images as the method effectively leverages the image priors from Zero123 (StableDiffusion)." ]
[ "TAGS\n#arxiv-2306.17140 #license-mit #has_space #region-us \n", "# ID-Pose: Sparse-view Camera Pose Estimation by Inverting Diffusion Models\n\n- ArXiv: URL\n- Code: URL\n- Demo: URL", "## Abstract\n- ID-Pose estimates camera poses of sparse-view images of a 3D object (appearance overlaps not required).\n- ID-Pose inversely uses the off-the-shelf Zero123 to estimate camera poses by iteratively minimizing denoising errors given input images.\n- ID-Pose is a zero-shot method that requires NO additional model training or finetuning.\n- ID-Pose exhibits strong generalization ability on open-world images as the method effectively leverages the image priors from Zero123 (StableDiffusion)." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_2-seqsight_65536_512_47M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.6364 - F1 Score: 0.6850 - Accuracy: 0.685 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6523 | 20.0 | 200 | 0.6602 | 0.6107 | 0.615 | | 0.5889 | 40.0 | 400 | 0.6631 | 0.6142 | 0.617 | | 0.5457 | 60.0 | 600 | 0.6771 | 0.6209 | 0.621 | | 0.5153 | 80.0 | 800 | 0.6813 | 0.6135 | 0.614 | | 0.4954 | 100.0 | 1000 | 0.6920 | 0.6136 | 0.614 | | 0.4847 | 120.0 | 1200 | 0.6981 | 0.6200 | 0.62 | | 0.4753 | 140.0 | 1400 | 0.6823 | 0.6309 | 0.631 | | 0.4698 | 160.0 | 1600 | 0.7015 | 0.6447 | 0.645 | | 0.4634 | 180.0 | 1800 | 0.6763 | 0.6356 | 0.636 | | 0.4559 | 200.0 | 2000 | 0.6808 | 0.6389 | 0.639 | | 0.4485 | 220.0 | 2200 | 0.7041 | 0.6408 | 0.641 | | 0.4435 | 240.0 | 2400 | 0.6797 | 0.6558 | 0.656 | | 0.4352 | 260.0 | 2600 | 0.7195 | 0.6430 | 0.643 | | 0.4283 | 280.0 | 2800 | 0.7155 | 0.65 | 0.65 | | 0.42 | 300.0 | 3000 | 0.7098 | 0.6516 | 0.653 | | 0.4135 | 320.0 | 3200 | 0.7060 | 0.6532 | 0.655 | | 0.4048 | 340.0 | 3400 | 0.7106 | 0.6423 | 0.644 | | 0.3943 | 360.0 | 3600 | 0.7462 | 0.6417 | 0.642 | | 0.3849 | 380.0 | 3800 | 0.7403 | 0.6528 | 0.653 | | 0.3768 | 400.0 | 4000 | 0.7351 | 0.6432 | 0.645 | | 0.3665 | 420.0 | 4200 | 0.7459 | 0.6371 | 0.638 | | 0.3585 | 440.0 | 4400 | 0.7503 | 0.6372 | 0.64 | | 0.3501 | 460.0 | 4600 | 0.7474 | 0.6424 | 0.643 | | 0.3425 | 480.0 | 4800 | 0.7972 | 0.6375 | 0.638 | | 0.3354 | 500.0 | 5000 | 0.7901 | 0.6448 | 0.645 | | 0.3266 | 520.0 | 5200 | 0.8136 | 0.6310 | 0.631 | | 0.32 | 540.0 | 5400 | 0.7967 | 0.6369 | 0.637 | | 0.3145 | 560.0 | 5600 | 0.7992 | 0.6369 | 0.637 | | 0.3082 | 580.0 | 5800 | 0.8255 | 0.6330 | 0.633 | | 0.3038 | 600.0 | 6000 | 0.8006 | 0.6268 | 0.627 | | 0.2966 | 620.0 | 6200 | 0.8352 | 0.6329 | 0.633 | | 0.2906 | 640.0 | 6400 | 0.8417 | 0.6247 | 0.625 | | 0.2872 | 660.0 | 6600 | 0.8195 | 0.6369 | 0.637 | | 0.2801 | 680.0 | 6800 | 0.8518 | 0.6330 | 0.633 | | 0.2764 | 700.0 | 7000 | 0.8594 | 0.638 | 0.638 | | 0.2728 | 720.0 | 7200 | 0.8553 | 0.632 | 0.632 | | 0.2662 | 740.0 | 7400 | 0.8691 | 0.6319 | 0.632 | | 0.2665 | 760.0 | 7600 | 0.8889 | 0.6310 | 0.631 | | 0.2623 | 780.0 | 7800 | 0.8657 | 0.63 | 0.63 | | 0.2598 | 800.0 | 8000 | 0.8847 | 0.6280 | 0.628 | | 0.2553 | 820.0 | 8200 | 0.8976 | 0.6270 | 0.627 | | 0.2528 | 840.0 | 8400 | 0.8937 | 0.6320 | 0.632 | | 0.2509 | 860.0 | 8600 | 0.8924 | 0.6370 | 0.637 | | 0.248 | 880.0 | 8800 | 0.9017 | 0.6249 | 0.625 | | 0.2473 | 900.0 | 9000 | 0.8995 | 0.6330 | 0.633 | | 0.2459 | 920.0 | 
9200 | 0.9111 | 0.6260 | 0.626 | | 0.2453 | 940.0 | 9400 | 0.9009 | 0.6209 | 0.621 | | 0.2441 | 960.0 | 9600 | 0.9082 | 0.6270 | 0.627 | | 0.2433 | 980.0 | 9800 | 0.9084 | 0.6249 | 0.625 | | 0.243 | 1000.0 | 10000 | 0.9080 | 0.6250 | 0.625 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_2-seqsight_65536_512_47M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_tf_2-seqsight_65536_512_47M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-04-17T05:23:44+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
GUE\_tf\_2-seqsight\_65536\_512\_47M-L32\_all ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset. It achieves the following results on the evaluation set: * Loss: 0.6364 * F1 Score: 0.6850 * Accuracy: 0.685 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_virus_covid-seqsight_65536_512_47M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset. It achieves the following results on the evaluation set: - Loss: 0.8911 - F1 Score: 0.6614 - Accuracy: 0.6627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 2.1502 | 5.56 | 200 | 1.9532 | 0.2539 | 0.2794 | | 1.7911 | 11.11 | 400 | 1.5566 | 0.4198 | 0.4216 | | 1.577 | 16.67 | 600 | 1.3911 | 0.4866 | 0.4820 | | 1.4645 | 22.22 | 800 | 1.3114 | 0.5226 | 0.5159 | | 1.3956 | 27.78 | 1000 | 1.2558 | 0.5337 | 0.5296 | | 1.3406 | 33.33 | 1200 | 1.2095 | 0.5505 | 0.5470 | | 1.3009 | 38.89 | 1400 | 1.1875 | 0.5572 | 0.5532 | | 1.27 | 44.44 | 1600 | 1.1603 | 0.5650 | 0.5633 | | 1.2405 | 50.0 | 1800 | 1.1324 | 0.5758 | 0.5770 | | 1.2126 | 55.56 | 2000 | 1.1088 | 0.5902 | 0.5898 | | 1.1864 | 61.11 | 2200 | 1.0813 | 0.5981 | 0.5990 | | 1.1546 | 66.67 | 2400 | 1.0571 | 0.6077 | 0.6058 | | 1.1303 | 72.22 | 2600 | 1.0436 | 0.6140 | 0.6124 | | 1.1097 | 77.78 | 2800 | 1.0236 | 0.6233 | 0.6218 | | 1.0861 | 83.33 | 3000 | 1.0071 | 0.6278 | 0.6251 | | 1.0663 | 88.89 | 3200 | 0.9930 | 0.6346 | 0.6321 | | 1.0468 | 94.44 | 3400 | 0.9837 | 0.6383 | 0.6353 | | 1.0312 | 100.0 | 3600 | 0.9733 | 0.6379 | 0.6363 | | 1.014 | 105.56 | 3800 | 0.9593 | 0.6442 | 0.6408 | | 1.0005 | 111.11 | 4000 | 0.9502 | 0.6450 | 0.6444 | | 0.9896 | 116.67 | 4200 | 0.9446 | 0.6461 | 0.6442 | | 0.9794 | 122.22 | 4400 | 0.9395 | 0.6483 | 0.6471 | | 0.9704 | 127.78 | 4600 | 0.9294 | 0.6521 | 0.6506 | | 0.9614 | 133.33 | 4800 | 0.9301 | 0.6520 | 0.6514 | | 0.9544 | 138.89 | 5000 | 0.9255 | 0.6534 | 0.6520 | | 0.9478 | 144.44 | 5200 | 0.9251 | 0.6539 | 0.6537 | | 0.9407 | 150.0 | 5400 | 0.9191 | 0.6544 | 0.6532 | | 0.9353 | 155.56 | 5600 | 0.9162 | 0.6570 | 0.6559 | | 0.9304 | 161.11 | 5800 | 0.9141 | 0.6588 | 0.6575 | | 0.9254 | 166.67 | 6000 | 0.9104 | 0.6605 | 0.6597 | | 0.9214 | 172.22 | 6200 | 0.9093 | 0.6612 | 0.6600 | | 0.9178 | 177.78 | 6400 | 0.9099 | 0.6612 | 0.6606 | | 0.9108 | 183.33 | 6600 | 0.9074 | 0.6610 | 0.6604 | | 0.9092 | 188.89 | 6800 | 0.9057 | 0.6650 | 0.6644 | | 0.9037 | 194.44 | 7000 | 0.9055 | 0.6633 | 0.6628 | | 0.9021 | 200.0 | 7200 | 0.9023 | 0.6660 | 0.6653 | | 0.8966 | 205.56 | 7400 | 0.8984 | 0.6672 | 0.6666 | | 0.8946 | 211.11 | 7600 | 0.8970 | 0.6630 | 0.6634 | | 0.8907 | 216.67 | 7800 | 0.8968 | 0.6666 | 0.6665 | | 0.8878 | 222.22 | 8000 | 0.8948 | 0.6670 | 0.6666 | | 0.8846 | 227.78 | 8200 | 0.8934 | 0.6652 | 0.6650 | | 0.882 | 233.33 | 8400 | 0.8934 | 0.6676 | 0.6677 | | 0.8814 | 238.89 | 8600 | 0.8919 | 0.6666 | 0.6665 | | 0.8799 | 244.44 | 8800 
| 0.8908 | 0.6679 | 0.6677 | | 0.8765 | 250.0 | 9000 | 0.8911 | 0.6670 | 0.6672 | | 0.8765 | 255.56 | 9200 | 0.8907 | 0.6664 | 0.6667 | | 0.875 | 261.11 | 9400 | 0.8906 | 0.6675 | 0.6675 | | 0.8743 | 266.67 | 9600 | 0.8909 | 0.6680 | 0.6679 | | 0.8736 | 272.22 | 9800 | 0.8907 | 0.6671 | 0.6671 | | 0.8731 | 277.78 | 10000 | 0.8906 | 0.6676 | 0.6676 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
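The card above documents a PEFT adapter; a minimal, hedged sketch of how one might inspect it is given below. It only reads the adapter configuration and deliberately avoids assumptions about the custom seqsight base architecture, which may require `trust_remote_code` to load in full.

```python
# Sketch: inspect the adapter metadata of the PEFT checkpoint described above.
# Assumes the repo ships a standard adapter_config.json (the usual PEFT layout).
from peft import PeftConfig

cfg = PeftConfig.from_pretrained("mahdibaghbanzadeh/GUE_virus_covid-seqsight_65536_512_47M-L32_all")
print(cfg.base_model_name_or_path)  # should point at mahdibaghbanzadeh/seqsight_65536_512_47M
print(cfg.peft_type, cfg.task_type)
```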
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_virus_covid-seqsight_65536_512_47M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_virus_covid-seqsight_65536_512_47M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-04-17T05:24:38+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
GUE\_virus\_covid-seqsight\_65536\_512\_47M-L32\_all ==================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset. It achieves the following results on the evaluation set: * Loss: 0.8911 * F1 Score: 0.6614 * Accuracy: 0.6627 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/eldogbbhed/Wiz2Beagle-7b-v1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
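As a concrete illustration of the usage note above, here is a minimal, hedged sketch using llama-cpp-python (one of several GGUF-capable runtimes). The quant filename comes from the table above; the context length and sampling settings are assumptions.

```python
# Sketch: download one quant from this repo and run a short completion locally.
# Requires: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Wiz2Beagle-7b-v1-GGUF",
    filename="Wiz2Beagle-7b-v1.Q4_K_M.gguf",  # the "fast, recommended" quant listed above
)
llm = Llama(model_path=gguf_path, n_ctx=4096)  # context length is an assumption
out = llm("What is a large language model?", max_tokens=128)
print(out["choices"][0]["text"])
```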
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["merge", "mergekit", "vortexmergekit", "amazingvince/Not-WizardLM-2-7B", "mlabonne/NeuralBeagle14-7B"], "base_model": "eldogbbhed/Wiz2Beagle-7b-v1", "quantized_by": "mradermacher"}
mradermacher/Wiz2Beagle-7b-v1-GGUF
null
[ "transformers", "gguf", "merge", "mergekit", "vortexmergekit", "amazingvince/Not-WizardLM-2-7B", "mlabonne/NeuralBeagle14-7B", "en", "base_model:eldogbbhed/Wiz2Beagle-7b-v1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-17T05:25:21+00:00
[]
[ "en" ]
TAGS #transformers #gguf #merge #mergekit #vortexmergekit #amazingvince/Not-WizardLM-2-7B #mlabonne/NeuralBeagle14-7B #en #base_model-eldogbbhed/Wiz2Beagle-7b-v1 #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #merge #mergekit #vortexmergekit #amazingvince/Not-WizardLM-2-7B #mlabonne/NeuralBeagle14-7B #en #base_model-eldogbbhed/Wiz2Beagle-7b-v1 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # myproject This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 10.7401 - Accuracy: 0.3585 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 70 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 60 | 6.8412 | 0.3585 | | No log | 2.0 | 120 | 7.0381 | 0.3585 | | No log | 3.0 | 180 | 7.2302 | 0.3585 | | No log | 4.0 | 240 | 7.3315 | 0.3585 | | No log | 5.0 | 300 | 7.5093 | 0.3585 | | No log | 6.0 | 360 | 7.6537 | 0.3585 | | No log | 7.0 | 420 | 7.7774 | 0.3585 | | No log | 8.0 | 480 | 7.8459 | 0.3585 | | 2.4126 | 9.0 | 540 | 7.9683 | 0.3585 | | 2.4126 | 10.0 | 600 | 8.0727 | 0.3585 | | 2.4126 | 11.0 | 660 | 8.1432 | 0.3585 | | 2.4126 | 12.0 | 720 | 8.2632 | 0.3585 | | 2.4126 | 13.0 | 780 | 8.3617 | 0.3585 | | 2.4126 | 14.0 | 840 | 8.3888 | 0.3585 | | 2.4126 | 15.0 | 900 | 8.4802 | 0.3585 | | 2.4126 | 16.0 | 960 | 8.6048 | 0.3585 | | 1.3107 | 17.0 | 1020 | 8.6832 | 0.3585 | | 1.3107 | 18.0 | 1080 | 8.7031 | 0.3585 | | 1.3107 | 19.0 | 1140 | 8.8062 | 0.3585 | | 1.3107 | 20.0 | 1200 | 8.9172 | 0.3585 | | 1.3107 | 21.0 | 1260 | 8.9063 | 0.3585 | | 1.3107 | 22.0 | 1320 | 9.0786 | 0.3585 | | 1.3107 | 23.0 | 1380 | 9.1961 | 0.3585 | | 1.3107 | 24.0 | 1440 | 9.1986 | 0.3585 | | 0.6626 | 25.0 | 1500 | 9.2499 | 0.3585 | | 0.6626 | 26.0 | 1560 | 9.2925 | 0.3585 | | 0.6626 | 27.0 | 1620 | 9.4094 | 0.3585 | | 0.6626 | 28.0 | 1680 | 9.4204 | 0.3585 | | 0.6626 | 29.0 | 1740 | 9.5304 | 0.3585 | | 0.6626 | 30.0 | 1800 | 9.5006 | 0.3585 | | 0.6626 | 31.0 | 1860 | 9.6281 | 0.3585 | | 0.6626 | 32.0 | 1920 | 9.6695 | 0.3585 | | 0.6626 | 33.0 | 1980 | 9.6979 | 0.3585 | | 0.3334 | 34.0 | 2040 | 9.8019 | 0.3585 | | 0.3334 | 35.0 | 2100 | 9.8644 | 0.3585 | | 0.3334 | 36.0 | 2160 | 9.8489 | 0.3585 | | 0.3334 | 37.0 | 2220 | 9.8635 | 0.3585 | | 0.3334 | 38.0 | 2280 | 9.9720 | 0.3585 | | 0.3334 | 39.0 | 2340 | 10.0142 | 0.3585 | | 0.3334 | 40.0 | 2400 | 10.1065 | 0.3585 | | 0.3334 | 41.0 | 2460 | 10.1095 | 0.3585 | | 0.1755 | 42.0 | 2520 | 10.1453 | 0.3585 | | 0.1755 | 43.0 | 2580 | 10.1601 | 0.3585 | | 0.1755 | 44.0 | 2640 | 10.2449 | 0.3585 | | 0.1755 | 45.0 | 2700 | 10.2644 | 0.3585 | | 0.1755 | 46.0 | 2760 | 10.2791 | 0.3585 | | 0.1755 | 47.0 | 2820 | 10.3368 | 0.3585 | | 0.1755 | 48.0 | 2880 | 10.3840 | 0.3585 | | 0.1755 | 49.0 | 2940 | 10.3765 | 0.3585 | | 0.1048 | 50.0 | 3000 | 10.4580 | 0.3585 | | 0.1048 | 51.0 | 3060 | 10.4575 | 0.3585 | | 0.1048 | 52.0 | 3120 | 10.4835 | 0.3585 | | 0.1048 | 53.0 | 3180 | 10.5889 | 0.3585 | | 0.1048 | 54.0 | 3240 | 10.5074 | 0.3585 | | 0.1048 | 55.0 | 3300 | 10.5430 | 0.3585 | | 0.1048 | 56.0 | 3360 | 10.5594 | 0.3585 | | 0.1048 | 57.0 | 3420 | 10.6237 | 0.3585 | | 0.1048 | 58.0 | 3480 | 10.6025 | 0.3585 | | 0.0744 | 59.0 | 3540 | 
10.6312 | 0.3585 | | 0.0744 | 60.0 | 3600 | 10.6667 | 0.3585 | | 0.0744 | 61.0 | 3660 | 10.6999 | 0.3585 | | 0.0744 | 62.0 | 3720 | 10.6992 | 0.3585 | | 0.0744 | 63.0 | 3780 | 10.6985 | 0.3585 | | 0.0744 | 64.0 | 3840 | 10.7162 | 0.3585 | | 0.0744 | 65.0 | 3900 | 10.7121 | 0.3585 | | 0.0744 | 66.0 | 3960 | 10.7050 | 0.3585 | | 0.06 | 67.0 | 4020 | 10.7263 | 0.3585 | | 0.06 | 68.0 | 4080 | 10.7295 | 0.3585 | | 0.06 | 69.0 | 4140 | 10.7384 | 0.3585 | | 0.06 | 70.0 | 4200 | 10.7401 | 0.3585 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.1+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
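For reference, a minimal inference sketch for the checkpoint described above; the label names and intended inputs are not documented in the card, so this only illustrates the generic text-classification pipeline call.

```python
# Sketch: run the fine-tuned classifier through the standard transformers pipeline.
from transformers import pipeline

clf = pipeline("text-classification", model="shobana/myproject")
print(clf("Example input sentence to classify."))  # returns a label/score pair per input
```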
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "myproject", "results": []}]}
shobana/myproject
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T05:26:03+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
myproject ========= This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 10.7401 * Accuracy: 0.3585 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 70 ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.1+cu121 * Datasets 2.16.1 * Tokenizers 0.15.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 70", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.1+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 70", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.1+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_tata-seqsight_4096_512_15M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset. It achieves the following results on the evaluation set: - Loss: 1.9825 - F1 Score: 0.6573 - Accuracy: 0.6574 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:| | 0.5881 | 66.67 | 200 | 0.7620 | 0.6185 | 0.6183 | | 0.374 | 133.33 | 400 | 0.9147 | 0.6573 | 0.6574 | | 0.272 | 200.0 | 600 | 1.0506 | 0.6653 | 0.6656 | | 0.2261 | 266.67 | 800 | 1.1062 | 0.6469 | 0.6476 | | 0.2015 | 333.33 | 1000 | 1.1267 | 0.6414 | 0.6411 | | 0.1875 | 400.0 | 1200 | 1.1849 | 0.6363 | 0.6362 | | 0.1758 | 466.67 | 1400 | 1.1932 | 0.6455 | 0.6460 | | 0.1634 | 533.33 | 1600 | 1.2416 | 0.6440 | 0.6444 | | 0.1555 | 600.0 | 1800 | 1.3574 | 0.6488 | 0.6493 | | 0.1475 | 666.67 | 2000 | 1.4170 | 0.6350 | 0.6378 | | 0.1374 | 733.33 | 2200 | 1.4038 | 0.6441 | 0.6444 | | 0.1286 | 800.0 | 2400 | 1.4878 | 0.6465 | 0.6476 | | 0.1229 | 866.67 | 2600 | 1.4307 | 0.6577 | 0.6574 | | 0.1162 | 933.33 | 2800 | 1.5280 | 0.6427 | 0.6427 | | 0.1091 | 1000.0 | 3000 | 1.4177 | 0.6488 | 0.6493 | | 0.1018 | 1066.67 | 3200 | 1.6755 | 0.6524 | 0.6525 | | 0.0973 | 1133.33 | 3400 | 1.5230 | 0.6463 | 0.6460 | | 0.0917 | 1200.0 | 3600 | 1.5559 | 0.6550 | 0.6558 | | 0.0877 | 1266.67 | 3800 | 1.6510 | 0.6602 | 0.6607 | | 0.0819 | 1333.33 | 4000 | 1.6203 | 0.6586 | 0.6591 | | 0.0777 | 1400.0 | 4200 | 1.6706 | 0.6600 | 0.6607 | | 0.0736 | 1466.67 | 4400 | 1.5861 | 0.6652 | 0.6656 | | 0.0698 | 1533.33 | 4600 | 1.6971 | 0.6623 | 0.6623 | | 0.0671 | 1600.0 | 4800 | 1.7818 | 0.6717 | 0.6721 | | 0.0634 | 1666.67 | 5000 | 1.8030 | 0.6590 | 0.6591 | | 0.0615 | 1733.33 | 5200 | 1.7842 | 0.6615 | 0.6623 | | 0.0587 | 1800.0 | 5400 | 1.7741 | 0.6591 | 0.6607 | | 0.0568 | 1866.67 | 5600 | 1.8269 | 0.6577 | 0.6591 | | 0.0551 | 1933.33 | 5800 | 1.8929 | 0.6661 | 0.6672 | | 0.0531 | 2000.0 | 6000 | 1.9567 | 0.6641 | 0.6639 | | 0.0505 | 2066.67 | 6200 | 1.8462 | 0.6526 | 0.6525 | | 0.0494 | 2133.33 | 6400 | 1.8927 | 0.6600 | 0.6607 | | 0.0473 | 2200.0 | 6600 | 2.0680 | 0.6575 | 0.6574 | | 0.046 | 2266.67 | 6800 | 1.8894 | 0.6526 | 0.6525 | | 0.0447 | 2333.33 | 7000 | 1.9051 | 0.6543 | 0.6542 | | 0.0444 | 2400.0 | 7200 | 2.1094 | 0.6511 | 0.6509 | | 0.0423 | 2466.67 | 7400 | 1.9778 | 0.6729 | 0.6737 | | 0.0411 | 2533.33 | 7600 | 1.9854 | 0.6618 | 0.6623 | | 0.0407 | 2600.0 | 7800 | 1.9483 | 0.6687 | 0.6688 | | 0.04 | 2666.67 | 8000 | 1.9649 | 0.6575 | 0.6574 | | 0.039 | 2733.33 | 8200 | 1.9644 | 0.6606 | 0.6607 | | 0.0388 | 2800.0 | 8400 | 2.0501 | 0.6670 | 0.6672 | | 0.0375 | 2866.67 | 8600 
| 2.0106 | 0.6622 | 0.6623 | | 0.0368 | 2933.33 | 8800 | 2.0446 | 0.6586 | 0.6591 | | 0.0363 | 3000.0 | 9000 | 2.0473 | 0.6555 | 0.6558 | | 0.0363 | 3066.67 | 9200 | 2.0159 | 0.6602 | 0.6607 | | 0.0358 | 3133.33 | 9400 | 2.0621 | 0.6618 | 0.6623 | | 0.0355 | 3200.0 | 9600 | 2.0734 | 0.6686 | 0.6688 | | 0.0357 | 3266.67 | 9800 | 2.0886 | 0.6639 | 0.6639 | | 0.0358 | 3333.33 | 10000 | 2.0690 | 0.6606 | 0.6607 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_4096_512_15M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_4096_512_15M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-04-17T05:27:07+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
GUE\_prom\_prom\_300\_tata-seqsight\_4096\_512\_15M-L32\_all ============================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset. It achieves the following results on the evaluation set: * Loss: 1.9825 * F1 Score: 0.6573 * Accuracy: 0.6574 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_notata-seqsight_4096_512_15M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset. It achieves the following results on the evaluation set: - Loss: 0.3061 - F1 Score: 0.8809 - Accuracy: 0.8809 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5323 | 9.52 | 200 | 0.4283 | 0.8033 | 0.8046 | | 0.4179 | 19.05 | 400 | 0.3918 | 0.8219 | 0.8221 | | 0.3857 | 28.57 | 600 | 0.3561 | 0.8403 | 0.8404 | | 0.3369 | 38.1 | 800 | 0.3149 | 0.8638 | 0.8640 | | 0.3068 | 47.62 | 1000 | 0.3020 | 0.8749 | 0.8749 | | 0.2867 | 57.14 | 1200 | 0.2971 | 0.8762 | 0.8762 | | 0.2709 | 66.67 | 1400 | 0.2933 | 0.8761 | 0.8764 | | 0.2563 | 76.19 | 1600 | 0.2878 | 0.8813 | 0.8813 | | 0.2443 | 85.71 | 1800 | 0.2873 | 0.8843 | 0.8843 | | 0.2348 | 95.24 | 2000 | 0.2866 | 0.8852 | 0.8852 | | 0.2272 | 104.76 | 2200 | 0.2826 | 0.8843 | 0.8845 | | 0.2198 | 114.29 | 2400 | 0.2857 | 0.8875 | 0.8875 | | 0.2147 | 123.81 | 2600 | 0.2840 | 0.8871 | 0.8871 | | 0.2095 | 133.33 | 2800 | 0.2849 | 0.8846 | 0.8847 | | 0.2061 | 142.86 | 3000 | 0.2945 | 0.8866 | 0.8866 | | 0.2015 | 152.38 | 3200 | 0.2890 | 0.8873 | 0.8873 | | 0.1982 | 161.9 | 3400 | 0.2824 | 0.8911 | 0.8911 | | 0.196 | 171.43 | 3600 | 0.2815 | 0.8903 | 0.8903 | | 0.1937 | 180.95 | 3800 | 0.2912 | 0.8868 | 0.8868 | | 0.1912 | 190.48 | 4000 | 0.2884 | 0.8858 | 0.8858 | | 0.188 | 200.0 | 4200 | 0.2868 | 0.8873 | 0.8873 | | 0.1871 | 209.52 | 4400 | 0.2966 | 0.8869 | 0.8869 | | 0.1836 | 219.05 | 4600 | 0.3002 | 0.8856 | 0.8856 | | 0.1803 | 228.57 | 4800 | 0.2935 | 0.8866 | 0.8866 | | 0.1802 | 238.1 | 5000 | 0.2988 | 0.8858 | 0.8858 | | 0.1781 | 247.62 | 5200 | 0.2998 | 0.8860 | 0.8860 | | 0.177 | 257.14 | 5400 | 0.2962 | 0.8898 | 0.8898 | | 0.1752 | 266.67 | 5600 | 0.2983 | 0.8877 | 0.8877 | | 0.1732 | 276.19 | 5800 | 0.2920 | 0.8869 | 0.8869 | | 0.1725 | 285.71 | 6000 | 0.2958 | 0.8879 | 0.8879 | | 0.1714 | 295.24 | 6200 | 0.3009 | 0.8879 | 0.8879 | | 0.1703 | 304.76 | 6400 | 0.2985 | 0.8866 | 0.8866 | | 0.169 | 314.29 | 6600 | 0.2975 | 0.8883 | 0.8883 | | 0.1675 | 323.81 | 6800 | 0.2965 | 0.8881 | 0.8881 | | 0.1671 | 333.33 | 7000 | 0.3114 | 0.8856 | 0.8856 | | 0.1653 | 342.86 | 7200 | 0.3036 | 0.8866 | 0.8866 | | 0.1651 | 352.38 | 7400 | 0.2980 | 0.8883 | 0.8883 | | 0.1639 | 361.9 | 7600 | 0.3052 | 0.8869 | 0.8869 | | 0.1629 | 371.43 | 7800 | 0.2982 | 0.8896 | 0.8896 | | 0.1624 | 380.95 | 8000 | 0.3036 | 0.8873 | 0.8873 | | 0.1616 | 390.48 | 8200 | 0.3030 | 0.8866 | 0.8866 | | 0.1614 | 400.0 | 8400 | 0.3024 | 0.8873 | 0.8873 | | 0.1603 | 409.52 | 8600 | 0.3034 | 0.8869 | 
0.8869 | | 0.1596 | 419.05 | 8800 | 0.2998 | 0.8869 | 0.8869 | | 0.159 | 428.57 | 9000 | 0.3049 | 0.8890 | 0.8890 | | 0.1593 | 438.1 | 9200 | 0.3088 | 0.8864 | 0.8864 | | 0.1579 | 447.62 | 9400 | 0.3060 | 0.8877 | 0.8877 | | 0.158 | 457.14 | 9600 | 0.3023 | 0.8875 | 0.8875 | | 0.1581 | 466.67 | 9800 | 0.3043 | 0.8871 | 0.8871 | | 0.1581 | 476.19 | 10000 | 0.3046 | 0.8875 | 0.8875 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_4096_512_15M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_4096_512_15M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-04-17T05:27:34+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
GUE\_prom\_prom\_300\_notata-seqsight\_4096\_512\_15M-L32\_all ============================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset. It achieves the following results on the evaluation set: * Loss: 0.3061 * F1 Score: 0.8809 * Accuracy: 0.8809 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-pii-masking-finetune This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.0521 - eval_runtime: 4808.8041 - eval_samples_per_second: 1.652 - eval_steps_per_second: 0.207 - epoch: 0.0067 - step: 100 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - training_steps: 500 - mixed_precision_training: Native AMP ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
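A minimal loading sketch for the adapter described above, assuming it is applied on top of its Mistral-7B-v0.1 base; the prompt wording is purely illustrative, since the card does not document the expected PII-masking prompt format.

```python
# Sketch: attach the LoRA adapter to the frozen Mistral-7B-v0.1 base and generate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "lizashr/mistral-pii-masking-finetune")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "Mask the PII in: John Doe lives at 42 Example Street."  # illustrative only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```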
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistral-pii-masking-finetune", "results": []}]}
lizashr/mistral-pii-masking-finetune
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-17T05:28:11+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
# mistral-pii-masking-finetune This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.0521 - eval_runtime: 4808.8041 - eval_samples_per_second: 1.652 - eval_steps_per_second: 0.207 - epoch: 0.0067 - step: 100 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - training_steps: 500 - mixed_precision_training: Native AMP ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# mistral-pii-masking-finetune\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 1.0521\n- eval_runtime: 4808.8041\n- eval_samples_per_second: 1.652\n- eval_steps_per_second: 0.207\n- epoch: 0.0067\n- step: 100", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2.5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1\n- training_steps: 500\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n", "# mistral-pii-masking-finetune\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 1.0521\n- eval_runtime: 4808.8041\n- eval_samples_per_second: 1.652\n- eval_steps_per_second: 0.207\n- epoch: 0.0067\n- step: 100", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2.5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1\n- training_steps: 500\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
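Since the usage section above is left as a TODO, here is a minimal, hedged loading sketch; the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption, not something stated in the card.

```python
# Sketch: fetch the PPO checkpoint from the Hub and run one greedy action in LunarLander-v2.
# Requires: pip install stable-baselines3 huggingface_sb3 "gymnasium[box2d]"
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="user87441257/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
action, _states = model.predict(obs, deterministic=True)
print(action)
```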
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "271.48 +/- 40.50", "name": "mean_reward", "verified": false}]}]}]}
user87441257/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-17T05:28:37+00:00
[]
[]
TAGS #stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# PPO Agent playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
dzungPaduahsgs/Mistral7Bcleaning
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-17T05:29:23+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
null
null
# Spaetzle-v69-7b This is a progressive (mostly dare-ties, but also slerp) merge with the intention of a suitable compromise for English and German local tasks. There is also an [unquantized](https://huggingface.co/cstr/Spaetzle-v69-7b) version. It achieves (running quantized) in - German EQ Bench: Score (v2_de): 62.59 (Parseable: 171.0). - English EQ Bench: Score (v2): 76.43 (Parseable: 171.0). It should work sufficiently well with ChatML prompt template (for all merged models should have seen ChatML prompts at least in DPO stage). Spaetzle-v69-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [abideen/AlphaMonarch-dora](https://huggingface.co/abideen/AlphaMonarch-dora) * [cstr/Spaetzle-v68-7b](https://huggingface.co/cstr/Spaetzle-v68-7b) The merge tree in total involves the following original models: - [abideen/AlphaMonarch-dora](https://huggingface.co/abideen/AlphaMonarch-dora) - [mayflowergmbh/Wiedervereinigung-7b-dpo](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo) - [flemmingmiguel/NeuDist-Ro-7B](https://huggingface.co/flemmingmiguel/NeuDist-Ro-7B) - [ResplendentAI/Flora_DPO_7B](https://huggingface.co/ResplendentAI/Flora_DPO_7B) - [yleo/EmertonMonarch-7B](https://huggingface.co/yleo/EmertonMonarch-7B) - [occiglot/occiglot-7b-de-en-instruct](https://huggingface.co/occiglot/occiglot-7b-de-en-instruct) - [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227) - [yleo/EmertonMonarch-7B](https://huggingface.co/yleo/EmertonMonarch-7B) - [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1) - [LeoLM/leo-mistral-hessianai-7b](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b) - [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix) - [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral) - [malteos/hermeo-7b](https://huggingface.co/malteos/hermeo-7b) - [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2) - [cognitivecomputations/openchat-3.5-0106-laser](https://huggingface.co/cognitivecomputations/openchat-3.5-0106-laser) ## 🧩 Configuration ```yaml models: - model: cstr/Spaetzle-v68-7b # no parameters necessary for base model - model: abideen/AlphaMonarch-dora parameters: density: 0.60 weight: 0.30 merge_method: dare_ties base_model: cstr/Spaetzle-v68-7b parameters: int8_mask: true dtype: bfloat16 random_seed: 0 tokenizer_source: base ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "cstr/Spaetzle-v69-7b" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "cc-by-nc-4.0", "tags": ["merge", "mergekit", "lazymergekit", "abideen/AlphaMonarch-dora"], "base_model": ["abideen/AlphaMonarch-dora"]}
cstr/Spaetzle-v69-7b-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "abideen/AlphaMonarch-dora", "base_model:abideen/AlphaMonarch-dora", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-17T05:29:45+00:00
[]
[]
TAGS #gguf #merge #mergekit #lazymergekit #abideen/AlphaMonarch-dora #base_model-abideen/AlphaMonarch-dora #license-cc-by-nc-4.0 #region-us
# Spaetzle-v69-7b This is a progressive (mostly dare-ties, but also slerp) merge with the intention of a suitable compromise for English and German local tasks. There is also an unquantized version. It achieves (running quantized) in - German EQ Bench: Score (v2_de): 62.59 (Parseable: 171.0). - English EQ Bench: Score (v2): 76.43 (Parseable: 171.0). It should work sufficiently well with ChatML prompt template (for all merged models should have seen ChatML prompts at least in DPO stage). Spaetzle-v69-7b is a merge of the following models using LazyMergekit: * abideen/AlphaMonarch-dora * cstr/Spaetzle-v68-7b The merge tree in total involves the following original models: - abideen/AlphaMonarch-dora - mayflowergmbh/Wiedervereinigung-7b-dpo - flemmingmiguel/NeuDist-Ro-7B - ResplendentAI/Flora_DPO_7B - yleo/EmertonMonarch-7B - occiglot/occiglot-7b-de-en-instruct - OpenPipe/mistral-ft-optimized-1227 - yleo/EmertonMonarch-7B - DiscoResearch/DiscoLM_German_7b_v1 - LeoLM/leo-mistral-hessianai-7b - DRXD1000/Phoenix - VAGOsolutions/SauerkrautLM-7b-v1-mistral - malteos/hermeo-7b - FelixChao/WestSeverus-7B-DPO-v2 - cognitivecomputations/openchat-3.5-0106-laser ## Configuration ## Usage
[ "# Spaetzle-v69-7b\nThis is a progressive (mostly dare-ties, but also slerp) merge with the intention of a suitable compromise for English and German local tasks.\n\nThere is also an unquantized version.\n\nIt achieves (running quantized) in \n- German EQ Bench: Score (v2_de): 62.59 (Parseable: 171.0).\n- English EQ Bench: Score (v2): 76.43 (Parseable: 171.0).\n\nIt should work sufficiently well with ChatML prompt template (for all merged models should have seen ChatML prompts at least in DPO stage).\n\nSpaetzle-v69-7b is a merge of the following models using LazyMergekit:\n* abideen/AlphaMonarch-dora\n* cstr/Spaetzle-v68-7b\n\nThe merge tree in total involves to following original models:\n - abideen/AlphaMonarch-dora\n - mayflowergmbh/Wiedervereinigung-7b-dpo\n - flemmingmiguel/NeuDist-Ro-7B\n - ResplendentAI/Flora_DPO_7B\n - yleo/EmertonMonarch-7B\n - occiglot/occiglot-7b-de-en-instruct \n - OpenPipe/mistral-ft-optimized-1227\n - yleo/EmertonMonarch-7B\n - DiscoResearch/DiscoLM_German_7b_v1\n - LeoLM/leo-mistral-hessianai-7b\n - DRXD1000/Phoenix\n - VAGOsolutions/SauerkrautLM-7b-v1-mistral\n - malteos/hermeo-7b\n - FelixChao/WestSeverus-7B-DPO-v2\n - cognitivecomputations/openchat-3.5-0106-laser", "## Configuration", "## Usage" ]
[ "TAGS\n#gguf #merge #mergekit #lazymergekit #abideen/AlphaMonarch-dora #base_model-abideen/AlphaMonarch-dora #license-cc-by-nc-4.0 #region-us \n", "# Spaetzle-v69-7b\nThis is a progressive (mostly dare-ties, but also slerp) merge with the intention of a suitable compromise for English and German local tasks.\n\nThere is also an unquantized version.\n\nIt achieves (running quantized) in \n- German EQ Bench: Score (v2_de): 62.59 (Parseable: 171.0).\n- English EQ Bench: Score (v2): 76.43 (Parseable: 171.0).\n\nIt should work sufficiently well with ChatML prompt template (for all merged models should have seen ChatML prompts at least in DPO stage).\n\nSpaetzle-v69-7b is a merge of the following models using LazyMergekit:\n* abideen/AlphaMonarch-dora\n* cstr/Spaetzle-v68-7b\n\nThe merge tree in total involves to following original models:\n - abideen/AlphaMonarch-dora\n - mayflowergmbh/Wiedervereinigung-7b-dpo\n - flemmingmiguel/NeuDist-Ro-7B\n - ResplendentAI/Flora_DPO_7B\n - yleo/EmertonMonarch-7B\n - occiglot/occiglot-7b-de-en-instruct \n - OpenPipe/mistral-ft-optimized-1227\n - yleo/EmertonMonarch-7B\n - DiscoResearch/DiscoLM_German_7b_v1\n - LeoLM/leo-mistral-hessianai-7b\n - DRXD1000/Phoenix\n - VAGOsolutions/SauerkrautLM-7b-v1-mistral\n - malteos/hermeo-7b\n - FelixChao/WestSeverus-7B-DPO-v2\n - cognitivecomputations/openchat-3.5-0106-laser", "## Configuration", "## Usage" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral7binstruct_summarize This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.4630 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6134 | 0.22 | 25 | 1.5156 | | 1.5269 | 0.43 | 50 | 1.4630 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
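A minimal usage sketch for the adapter described above, assuming it is loaded with AutoPeftModelForCausalLM and prompted through the Instruct-v0.2 chat template; the summarization prompt wording is illustrative only.

```python
# Sketch: load base + adapter in one call and request a summary via the chat template.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    "allanctan-ai/mistral7binstruct_summarize", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [{"role": "user", "content": "Summarize the following text: <your document here>"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=200)[0], skip_special_tokens=True))
```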
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral7binstruct_summarize", "results": []}]}
allanctan-ai/mistral7binstruct_summarize
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-04-17T05:30:26+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
mistral7binstruct\_summarize ============================ This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset. It achieves the following results on the evaluation set: * Loss: 1.4630 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 1 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: constant * lr\_scheduler\_warmup\_steps: 0.03 * training\_steps: 50 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# microsoft/rho-math-7b-v0.1 AWQ - Model creator: [microsoft](https://huggingface.co/microsoft) - Original model: [rho-math-7b-v0.1](https://huggingface.co/microsoft/rho-math-7b-v0.1) ## Model summary Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that are aligned with the desired distribution.
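Since the card stops at the summary, here is a minimal sketch of how a 4-bit AWQ checkpoint like this is commonly loaded with transformers (assumes `autoawq` is installed; the repo id is taken from this record, and the example prompt is illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "solidrust/rho-math-7b-v0.1-AWQ"  # id taken from this record

# transformers dispatches to the AWQ kernels automatically when autoawq is installed.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)

# Plain few-shot style prompt, since this is a base (not instruction-tuned) math model.
prompt = "Question: What is 15% of 240?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```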
{"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["mistral", "4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "chatml", "nlp", "math"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
solidrust/rho-math-7b-v0.1-AWQ
null
[ "transformers", "safetensors", "mistral", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "chatml", "nlp", "math", "en", "license:mit", "text-generation-inference", "region:us" ]
null
2024-04-17T05:32:27+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #nlp #math #en #license-mit #text-generation-inference #region-us
# microsoft/rho-math-7b-v0.1 AWQ - Model creator: microsoft - Original model: rho-math-7b-v0.1 ## Model summary Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that aligned with the desired distribution.
[ "# microsoft/rho-math-7b-v0.1 AWQ\n\n- Model creator: microsoft\n- Original model: rho-math-7b-v0.1", "## Model summary\n\nRho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that aligned with the desired distribution." ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #nlp #math #en #license-mit #text-generation-inference #region-us \n", "# microsoft/rho-math-7b-v0.1 AWQ\n\n- Model creator: microsoft\n- Original model: rho-math-7b-v0.1", "## Model summary\n\nRho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that aligned with the desired distribution." ]
null
null
Model weights for SigLIP-Phi2(LoRA) VLM. Scripts to build and train the model are available at [Prismatic-SigLIP-Phi2-LoRA-VLM](https://github.com/NMS05/Prismatic-SigLIP-Phi2-LoRA-VLM).
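Because these weights are consumed by the external scripts rather than loaded through a standard pipeline, here is a hedged sketch of fetching them locally with `huggingface_hub` (repo id taken from this record; how the scripts expect to be pointed at the folder is an assumption):

```python
from huggingface_hub import snapshot_download

# Downloads all files in the repo to the local HF cache and returns the folder path,
# which can then be passed to the Prismatic-SigLIP-Phi2-LoRA-VLM scripts.
local_dir = snapshot_download(repo_id="nms05/SigLIP_Phi2_LoRA_VLM")
print("weights downloaded to:", local_dir)
```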
{"language": ["en"]}
nms05/SigLIP_Phi2_LoRA_VLM
null
[ "en", "region:us" ]
null
2024-04-17T05:33:46+00:00
[]
[ "en" ]
TAGS #en #region-us
Model weights for SigLIP-Phi2(LoRA) VLM. Scripts to build and train the model available at Prismatic-SigLIP-Phi2-LoRA-VLM.
[]
[ "TAGS\n#en #region-us \n" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_all-seqsight_4096_512_15M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.5539 - F1 Score: 0.7302 - Accuracy: 0.7302 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 1536 - eval_batch_size: 1536 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6272 | 6.45 | 200 | 0.5792 | 0.6955 | 0.6966 | | 0.5707 | 12.9 | 400 | 0.5665 | 0.7114 | 0.7118 | | 0.5549 | 19.35 | 600 | 0.5597 | 0.7165 | 0.7177 | | 0.5426 | 25.81 | 800 | 0.5621 | 0.7190 | 0.7193 | | 0.5281 | 32.26 | 1000 | 0.5555 | 0.7192 | 0.7209 | | 0.5169 | 38.71 | 1200 | 0.5523 | 0.7243 | 0.7243 | | 0.5073 | 45.16 | 1400 | 0.5499 | 0.7283 | 0.7284 | | 0.5004 | 51.61 | 1600 | 0.5424 | 0.7282 | 0.7282 | | 0.492 | 58.06 | 1800 | 0.5419 | 0.7268 | 0.7269 | | 0.4857 | 64.52 | 2000 | 0.5490 | 0.7304 | 0.7304 | | 0.4812 | 70.97 | 2200 | 0.5578 | 0.7299 | 0.7299 | | 0.4751 | 77.42 | 2400 | 0.5539 | 0.7261 | 0.7262 | | 0.4708 | 83.87 | 2600 | 0.5426 | 0.7287 | 0.7287 | | 0.468 | 90.32 | 2800 | 0.5534 | 0.7287 | 0.7292 | | 0.4645 | 96.77 | 3000 | 0.5548 | 0.7300 | 0.7302 | | 0.4613 | 103.23 | 3200 | 0.5490 | 0.7339 | 0.7340 | | 0.4579 | 109.68 | 3400 | 0.5584 | 0.7315 | 0.7316 | | 0.4551 | 116.13 | 3600 | 0.5717 | 0.7303 | 0.7309 | | 0.4528 | 122.58 | 3800 | 0.5512 | 0.7273 | 0.7274 | | 0.4503 | 129.03 | 4000 | 0.5631 | 0.7334 | 0.7334 | | 0.4457 | 135.48 | 4200 | 0.5729 | 0.7318 | 0.7319 | | 0.4449 | 141.94 | 4400 | 0.5590 | 0.7309 | 0.7309 | | 0.442 | 148.39 | 4600 | 0.5747 | 0.7260 | 0.7272 | | 0.4397 | 154.84 | 4800 | 0.5663 | 0.7305 | 0.7307 | | 0.4372 | 161.29 | 5000 | 0.5739 | 0.7267 | 0.7272 | | 0.4361 | 167.74 | 5200 | 0.5678 | 0.7278 | 0.7279 | | 0.4331 | 174.19 | 5400 | 0.5674 | 0.7301 | 0.7301 | | 0.4309 | 180.65 | 5600 | 0.5603 | 0.7280 | 0.7280 | | 0.4282 | 187.1 | 5800 | 0.5742 | 0.7304 | 0.7304 | | 0.4264 | 193.55 | 6000 | 0.5788 | 0.7259 | 0.7262 | | 0.425 | 200.0 | 6200 | 0.5643 | 0.7290 | 0.7291 | | 0.4235 | 206.45 | 6400 | 0.5807 | 0.7294 | 0.7297 | | 0.4219 | 212.9 | 6600 | 0.5839 | 0.7262 | 0.7265 | | 0.4198 | 219.35 | 6800 | 0.5887 | 0.7287 | 0.7291 | | 0.4185 | 225.81 | 7000 | 0.5837 | 0.7315 | 0.7316 | | 0.4163 | 232.26 | 7200 | 0.5854 | 0.7245 | 0.7253 | | 0.4161 | 238.71 | 7400 | 0.5796 | 0.7261 | 0.7267 | | 0.4129 | 245.16 | 7600 | 0.5821 | 0.7287 | 0.7289 | | 0.4113 | 251.61 | 7800 | 0.5813 | 0.7234 | 0.7238 | | 0.4097 | 258.06 | 8000 | 0.5916 | 0.7265 | 0.7270 | | 0.4089 | 264.52 | 8200 | 0.5927 | 0.7253 | 0.7257 | | 0.4063 | 270.97 | 8400 | 0.5928 | 0.7235 | 0.7240 | | 0.4068 | 277.42 | 8600 | 0.5872 | 0.7264 | 0.7265 | | 
0.4056 | 283.87 | 8800 | 0.5967 | 0.7215 | 0.7223 | | 0.4047 | 290.32 | 9000 | 0.5896 | 0.7249 | 0.7253 | | 0.4037 | 296.77 | 9200 | 0.5942 | 0.7249 | 0.7253 | | 0.4041 | 303.23 | 9400 | 0.5942 | 0.7238 | 0.7243 | | 0.4025 | 309.68 | 9600 | 0.5948 | 0.7215 | 0.7221 | | 0.4032 | 316.13 | 9800 | 0.5945 | 0.7242 | 0.7247 | | 0.403 | 322.58 | 10000 | 0.5945 | 0.7242 | 0.7247 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
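The hyperparameter list above maps fairly directly onto `transformers.TrainingArguments`. The sketch below only illustrates that mapping; the actual training script, the LoRA/PEFT settings, and the data pipeline are not given by the card:

```python
from transformers import TrainingArguments

# Mirrors the values listed in the card; anything not listed there
# (LoRA config, data collator, metric function) is intentionally omitted.
training_args = TrainingArguments(
    output_dir="GUE_prom_prom_core_all-seqsight_4096_512_15M-L32_all",  # illustrative name
    learning_rate=5e-4,
    per_device_train_batch_size=1536,
    per_device_eval_batch_size=1536,
    seed=42,
    max_steps=10_000,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```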
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_4096_512_15M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_15M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-04-17T05:33:51+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
GUE\_prom\_prom\_core\_all-seqsight\_4096\_512\_15M-L32\_all ============================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset. It achieves the following results on the evaluation set: * Loss: 0.5539 * F1 Score: 0.7302 * Accuracy: 0.7302 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 1536 * eval\_batch\_size: 1536 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
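The card above is an unfilled template, so the only concrete facts come from its metadata: a PEFT 0.10.0 adapter on top of mistralai/Mistral-7B-Instruct-v0.2. Under the assumption that the repo hosts a LoRA-style adapter (which the PEFT metadata suggests but does not guarantee), composing base model and adapter typically looks like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"    # from the card metadata
adapter_id = "muskaanthawani/Enlighten_Instruct"  # from this record

base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the adapter weights
tokenizer = AutoTokenizer.from_pretrained(base_id)

# model.merge_and_unload() would bake the adapter into the base weights if needed.
```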
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
muskaanthawani/Enlighten_Instruct
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-17T05:34:37+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-chat-hf_esnli_1000_1ep This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 16 - seed: 0 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.1+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "Llama-2-7b-chat-hf_esnli_1000_1ep", "results": []}]}
mohsenfayyaz/Llama-2-7b-chat-hf_esnli_1000_1ep
null
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-2-7b-chat-hf", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T05:35:13+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #trl #sft #generated_from_trainer #conversational #base_model-meta-llama/Llama-2-7b-chat-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Llama-2-7b-chat-hf_esnli_1000_1ep This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 16 - seed: 0 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.1+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
[ "# Llama-2-7b-chat-hf_esnli_1000_1ep\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 16\n- seed: 0\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #trl #sft #generated_from_trainer #conversational #base_model-meta-llama/Llama-2-7b-chat-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Llama-2-7b-chat-hf_esnli_1000_1ep\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 16\n- seed: 0\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_notata-seqsight_4096_512_15M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset. It achieves the following results on the evaluation set: - Loss: 0.5492 - F1 Score: 0.7441 - Accuracy: 0.7441 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6095 | 9.52 | 200 | 0.5476 | 0.7247 | 0.7249 | | 0.5528 | 19.05 | 400 | 0.5363 | 0.7369 | 0.7370 | | 0.5333 | 28.57 | 600 | 0.5274 | 0.7486 | 0.7486 | | 0.5185 | 38.1 | 800 | 0.5233 | 0.7511 | 0.7511 | | 0.5044 | 47.62 | 1000 | 0.5213 | 0.7543 | 0.7543 | | 0.4923 | 57.14 | 1200 | 0.5152 | 0.7541 | 0.7541 | | 0.4801 | 66.67 | 1400 | 0.5189 | 0.7525 | 0.7526 | | 0.471 | 76.19 | 1600 | 0.5167 | 0.7513 | 0.7513 | | 0.463 | 85.71 | 1800 | 0.5257 | 0.7511 | 0.7515 | | 0.456 | 95.24 | 2000 | 0.5220 | 0.7517 | 0.7516 | | 0.4503 | 104.76 | 2200 | 0.5116 | 0.7537 | 0.7537 | | 0.4438 | 114.29 | 2400 | 0.5239 | 0.7541 | 0.7541 | | 0.4404 | 123.81 | 2600 | 0.5267 | 0.7470 | 0.7475 | | 0.4351 | 133.33 | 2800 | 0.5319 | 0.7540 | 0.7541 | | 0.4317 | 142.86 | 3000 | 0.5252 | 0.7543 | 0.7547 | | 0.4274 | 152.38 | 3200 | 0.5345 | 0.7537 | 0.7539 | | 0.4245 | 161.9 | 3400 | 0.5278 | 0.7551 | 0.7552 | | 0.4212 | 171.43 | 3600 | 0.5352 | 0.7562 | 0.7562 | | 0.4174 | 180.95 | 3800 | 0.5336 | 0.7527 | 0.7533 | | 0.4144 | 190.48 | 4000 | 0.5344 | 0.7533 | 0.7537 | | 0.4118 | 200.0 | 4200 | 0.5304 | 0.7526 | 0.7526 | | 0.4097 | 209.52 | 4400 | 0.5387 | 0.7556 | 0.7558 | | 0.4066 | 219.05 | 4600 | 0.5394 | 0.7585 | 0.7586 | | 0.4037 | 228.57 | 4800 | 0.5416 | 0.7569 | 0.7569 | | 0.4005 | 238.1 | 5000 | 0.5492 | 0.7587 | 0.7590 | | 0.3979 | 247.62 | 5200 | 0.5432 | 0.7592 | 0.7592 | | 0.3946 | 257.14 | 5400 | 0.5617 | 0.7558 | 0.7560 | | 0.3937 | 266.67 | 5600 | 0.5517 | 0.7528 | 0.7530 | | 0.3892 | 276.19 | 5800 | 0.5647 | 0.7506 | 0.7513 | | 0.3878 | 285.71 | 6000 | 0.5547 | 0.7533 | 0.7535 | | 0.3857 | 295.24 | 6200 | 0.5618 | 0.7537 | 0.7537 | | 0.3839 | 304.76 | 6400 | 0.5768 | 0.7463 | 0.7475 | | 0.3815 | 314.29 | 6600 | 0.5619 | 0.7524 | 0.7526 | | 0.3789 | 323.81 | 6800 | 0.5675 | 0.7518 | 0.7518 | | 0.3765 | 333.33 | 7000 | 0.5781 | 0.7506 | 0.7509 | | 0.3749 | 342.86 | 7200 | 0.5642 | 0.7513 | 0.7516 | | 0.3736 | 352.38 | 7400 | 0.5723 | 0.7506 | 0.7507 | | 0.3725 | 361.9 | 7600 | 0.5831 | 0.7492 | 0.7496 | | 0.3696 | 371.43 | 7800 | 0.5718 | 0.7499 | 0.7500 | | 0.368 | 380.95 | 8000 | 0.5719 | 0.7463 | 0.7466 | | 0.3671 | 390.48 | 8200 | 0.5740 | 0.7506 | 0.7507 | | 0.3661 | 400.0 | 8400 | 0.5788 | 0.7471 | 0.7473 | | 0.3649 | 409.52 | 8600 | 0.5805 | 0.7466 | 
0.7469 | | 0.3629 | 419.05 | 8800 | 0.5804 | 0.7448 | 0.7452 | | 0.3618 | 428.57 | 9000 | 0.5790 | 0.7456 | 0.7458 | | 0.3622 | 438.1 | 9200 | 0.5769 | 0.7468 | 0.7469 | | 0.3607 | 447.62 | 9400 | 0.5825 | 0.7486 | 0.7488 | | 0.3607 | 457.14 | 9600 | 0.5856 | 0.7457 | 0.7462 | | 0.3607 | 466.67 | 9800 | 0.5805 | 0.7430 | 0.7434 | | 0.3597 | 476.19 | 10000 | 0.5815 | 0.7441 | 0.7445 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_4096_512_15M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_4096_512_15M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-04-17T05:37:23+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
GUE\_prom\_prom\_core\_notata-seqsight\_4096\_512\_15M-L32\_all =============================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset. It achieves the following results on the evaluation set: * Loss: 0.5492 * F1 Score: 0.7441 * Accuracy: 0.7441 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/bongchoi/MoMo-70B-V1.1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/MoMo-70B-V1.1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | | | 
[GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
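The i1-Q6_K quant above ships in two parts, and the card defers to external READMEs for how to join them. Assuming the parts are plain byte-splits of a single GGUF file (which is how such multi-part uploads are normally produced), downloading and re-joining can be sketched as:

```python
from huggingface_hub import hf_hub_download

repo = "mradermacher/MoMo-70B-V1.1-i1-GGUF"
parts = [
    "MoMo-70B-V1.1.i1-Q6_K.gguf.part1of2",
    "MoMo-70B-V1.1.i1-Q6_K.gguf.part2of2",
]

# Byte-concatenate the parts, in order, into one usable .gguf file.
with open("MoMo-70B-V1.1.i1-Q6_K.gguf", "wb") as joined:
    for name in parts:
        path = hf_hub_download(repo_id=repo, filename=name)
        with open(path, "rb") as part:
            while chunk := part.read(1 << 24):
                joined.write(chunk)
```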
{"language": ["en"], "license": "llama2", "library_name": "transformers", "base_model": "bongchoi/MoMo-70B-V1.1", "quantized_by": "mradermacher"}
mradermacher/MoMo-70B-V1.1-i1-GGUF
null
[ "transformers", "gguf", "en", "base_model:bongchoi/MoMo-70B-V1.1", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-17T05:37:47+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-bongchoi/MoMo-70B-V1.1 #license-llama2 #endpoints_compatible #region-us
About ----- weighted/imatrix quants of URL static quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-bongchoi/MoMo-70B-V1.1 #license-llama2 #endpoints_compatible #region-us \n" ]
text-generation
transformers
# microsoft/rho-math-7b-interpreter-v0.1 AWQ - Model creator: [microsoft](https://huggingface.co/microsoft) - Original model: [rho-math-7b-interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) ## Model summary Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that are aligned with the desired distribution.
{"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["mistral", "4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "chatml", "nlp", "math"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
solidrust/rho-math-7b-interpreter-v0.1-AWQ
null
[ "transformers", "safetensors", "mistral", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "chatml", "nlp", "math", "en", "license:mit", "text-generation-inference", "region:us" ]
null
2024-04-17T05:40:21+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #nlp #math #en #license-mit #text-generation-inference #region-us
# microsoft/rho-math-7b-interpreter-v0.1 AWQ - Model creator: microsoft - Original model: rho-math-7b-interpreter-v0.1 ## Model summary Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that aligned with the desired distribution.
[ "# microsoft/rho-math-7b-interpreter-v0.1 AWQ\n\n- Model creator: microsoft\n- Original model: rho-math-7b-interpreter-v0.1", "## Model summary\n\nRho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that aligned with the desired distribution." ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #nlp #math #en #license-mit #text-generation-inference #region-us \n", "# microsoft/rho-math-7b-interpreter-v0.1 AWQ\n\n- Model creator: microsoft\n- Original model: rho-math-7b-interpreter-v0.1", "## Model summary\n\nRho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that aligned with the desired distribution." ]
null
null
# MythicalCow/llama2-7b-openhermes-15k-mini-Q4_K_M-GGUF This model was converted to GGUF format from [`Raghava45/llama2-7b-openhermes-15k-mini`](https://huggingface.co/Raghava45/llama2-7b-openhermes-15k-mini) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Raghava45/llama2-7b-openhermes-15k-mini) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo MythicalCow/llama2-7b-openhermes-15k-mini-Q4_K_M-GGUF --model llama2-7b-openhermes-15k-mini.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo MythicalCow/llama2-7b-openhermes-15k-mini-Q4_K_M-GGUF --model llama2-7b-openhermes-15k-mini.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama2-7b-openhermes-15k-mini.Q4_K_M.gguf -n 128 ```
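The card already covers the llama.cpp CLI and server. As an alternative sketch, the same quant can usually be loaded from Python with `llama-cpp-python`; the repo id and filename are taken from the commands above, and the snippet is illustrative rather than an officially supported path:

```python
from llama_cpp import Llama

# Pulls the GGUF file from the Hub (via huggingface_hub) and loads it.
llm = Llama.from_pretrained(
    repo_id="MythicalCow/llama2-7b-openhermes-15k-mini-Q4_K_M-GGUF",
    filename="llama2-7b-openhermes-15k-mini.Q4_K_M.gguf",
    n_ctx=2048,
)
print(llm("The meaning to life and the universe is", max_tokens=128)["choices"][0]["text"])
```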
{"tags": ["llama-cpp", "gguf-my-repo"]}
MythicalCow/llama2-7b-openhermes-15k-mini-Q4_K_M-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "region:us" ]
null
2024-04-17T05:42:56+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #region-us
# MythicalCow/llama2-7b-openhermes-15k-mini-Q4_K_M-GGUF This model was converted to GGUF format from 'Raghava45/llama2-7b-openhermes-15k-mini' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# MythicalCow/llama2-7b-openhermes-15k-mini-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'Raghava45/llama2-7b-openhermes-15k-mini' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n", "# MythicalCow/llama2-7b-openhermes-15k-mini-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'Raghava45/llama2-7b-openhermes-15k-mini' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_tata-seqsight_4096_512_15M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset. It achieves the following results on the evaluation set: - Loss: 1.2556 - F1 Score: 0.7650 - Accuracy: 0.7651 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:| | 0.523 | 66.67 | 200 | 0.5727 | 0.7321 | 0.7325 | | 0.3368 | 133.33 | 400 | 0.6622 | 0.7243 | 0.7243 | | 0.2487 | 200.0 | 600 | 0.7770 | 0.7241 | 0.7243 | | 0.1983 | 266.67 | 800 | 0.8722 | 0.7422 | 0.7423 | | 0.1716 | 333.33 | 1000 | 0.8542 | 0.7339 | 0.7341 | | 0.1513 | 400.0 | 1200 | 0.9312 | 0.7320 | 0.7325 | | 0.1382 | 466.67 | 1400 | 0.8567 | 0.7421 | 0.7423 | | 0.1261 | 533.33 | 1600 | 0.9475 | 0.7313 | 0.7325 | | 0.1151 | 600.0 | 1800 | 0.9532 | 0.7384 | 0.7390 | | 0.1059 | 666.67 | 2000 | 0.9111 | 0.7471 | 0.7471 | | 0.0962 | 733.33 | 2200 | 0.9675 | 0.7367 | 0.7374 | | 0.0884 | 800.0 | 2400 | 0.9571 | 0.7553 | 0.7553 | | 0.0824 | 866.67 | 2600 | 0.9949 | 0.7503 | 0.7504 | | 0.0741 | 933.33 | 2800 | 1.0339 | 0.7500 | 0.7504 | | 0.0679 | 1000.0 | 3000 | 1.1008 | 0.7400 | 0.7406 | | 0.0632 | 1066.67 | 3200 | 0.9800 | 0.7404 | 0.7406 | | 0.0601 | 1133.33 | 3400 | 1.0085 | 0.7299 | 0.7308 | | 0.0547 | 1200.0 | 3600 | 1.0480 | 0.7553 | 0.7553 | | 0.0511 | 1266.67 | 3800 | 1.0642 | 0.7586 | 0.7586 | | 0.0479 | 1333.33 | 4000 | 1.0699 | 0.7504 | 0.7504 | | 0.044 | 1400.0 | 4200 | 1.1045 | 0.7422 | 0.7423 | | 0.0429 | 1466.67 | 4400 | 1.0629 | 0.7520 | 0.7520 | | 0.0394 | 1533.33 | 4600 | 1.1421 | 0.7552 | 0.7553 | | 0.0379 | 1600.0 | 4800 | 1.0744 | 0.7455 | 0.7455 | | 0.0355 | 1666.67 | 5000 | 1.1249 | 0.7520 | 0.7520 | | 0.0341 | 1733.33 | 5200 | 1.1211 | 0.7503 | 0.7504 | | 0.0325 | 1800.0 | 5400 | 1.1611 | 0.7615 | 0.7618 | | 0.0318 | 1866.67 | 5600 | 1.0955 | 0.7633 | 0.7635 | | 0.0291 | 1933.33 | 5800 | 1.2250 | 0.7566 | 0.7569 | | 0.0284 | 2000.0 | 6000 | 1.2232 | 0.7520 | 0.7520 | | 0.0274 | 2066.67 | 6200 | 1.2207 | 0.7602 | 0.7602 | | 0.026 | 2133.33 | 6400 | 1.2717 | 0.7453 | 0.7455 | | 0.0252 | 2200.0 | 6600 | 1.2133 | 0.7601 | 0.7602 | | 0.0239 | 2266.67 | 6800 | 1.1838 | 0.7585 | 0.7586 | | 0.0231 | 2333.33 | 7000 | 1.2065 | 0.7520 | 0.7520 | | 0.0235 | 2400.0 | 7200 | 1.2239 | 0.7503 | 0.7504 | | 0.0228 | 2466.67 | 7400 | 1.2183 | 0.7421 | 0.7423 | | 0.021 | 2533.33 | 7600 | 1.2155 | 0.7519 | 0.7520 | | 0.022 | 2600.0 | 7800 | 1.1825 | 0.7585 | 0.7586 | | 0.0201 | 2666.67 | 8000 | 1.2576 | 0.7586 | 0.7586 | | 0.0202 | 2733.33 | 8200 | 1.2602 | 0.7585 | 0.7586 | | 0.0194 | 2800.0 | 8400 | 1.3151 | 0.7550 | 0.7553 | | 0.0193 | 2866.67 | 
8600 | 1.3162 | 0.7537 | 0.7537 | | 0.0189 | 2933.33 | 8800 | 1.2777 | 0.7553 | 0.7553 | | 0.0188 | 3000.0 | 9000 | 1.2548 | 0.7634 | 0.7635 | | 0.0187 | 3066.67 | 9200 | 1.2615 | 0.7667 | 0.7667 | | 0.018 | 3133.33 | 9400 | 1.2734 | 0.7602 | 0.7602 | | 0.0179 | 3200.0 | 9600 | 1.2599 | 0.7635 | 0.7635 | | 0.0176 | 3266.67 | 9800 | 1.2896 | 0.7651 | 0.7651 | | 0.018 | 3333.33 | 10000 | 1.2828 | 0.7618 | 0.7618 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_4096_512_15M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_4096_512_15M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-04-17T05:43:54+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
GUE\_prom\_prom\_core\_tata-seqsight\_4096\_512\_15M-L32\_all ============================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset. It achieves the following results on the evaluation set: * Loss: 1.2556 * F1 Score: 0.7650 * Accuracy: 0.7651 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_all-seqsight_4096_512_15M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.4056 - F1 Score: 0.8417 - Accuracy: 0.8417 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5665 | 8.33 | 200 | 0.4711 | 0.7802 | 0.7811 | | 0.4722 | 16.67 | 400 | 0.4451 | 0.7928 | 0.7931 | | 0.4452 | 25.0 | 600 | 0.4281 | 0.7997 | 0.8003 | | 0.4142 | 33.33 | 800 | 0.3891 | 0.8238 | 0.8240 | | 0.3836 | 41.67 | 1000 | 0.3717 | 0.8338 | 0.8338 | | 0.3644 | 50.0 | 1200 | 0.3632 | 0.8413 | 0.8414 | | 0.3507 | 58.33 | 1400 | 0.3560 | 0.8427 | 0.8427 | | 0.3383 | 66.67 | 1600 | 0.3546 | 0.8439 | 0.8439 | | 0.3279 | 75.0 | 1800 | 0.3544 | 0.8463 | 0.8463 | | 0.3213 | 83.33 | 2000 | 0.3528 | 0.8446 | 0.8446 | | 0.3149 | 91.67 | 2200 | 0.3586 | 0.8435 | 0.8436 | | 0.3054 | 100.0 | 2400 | 0.3518 | 0.8454 | 0.8454 | | 0.3004 | 108.33 | 2600 | 0.3496 | 0.8480 | 0.8480 | | 0.2958 | 116.67 | 2800 | 0.3534 | 0.8461 | 0.8461 | | 0.291 | 125.0 | 3000 | 0.3493 | 0.8494 | 0.8495 | | 0.2872 | 133.33 | 3200 | 0.3515 | 0.8469 | 0.8470 | | 0.2831 | 141.67 | 3400 | 0.3504 | 0.8491 | 0.8492 | | 0.2814 | 150.0 | 3600 | 0.3479 | 0.8506 | 0.8507 | | 0.2769 | 158.33 | 3800 | 0.3566 | 0.8494 | 0.8495 | | 0.2732 | 166.67 | 4000 | 0.3538 | 0.8505 | 0.8505 | | 0.2718 | 175.0 | 4200 | 0.3622 | 0.8501 | 0.8503 | | 0.2706 | 183.33 | 4400 | 0.3550 | 0.8519 | 0.8520 | | 0.2679 | 191.67 | 4600 | 0.3555 | 0.8535 | 0.8535 | | 0.2641 | 200.0 | 4800 | 0.3530 | 0.8524 | 0.8524 | | 0.2613 | 208.33 | 5000 | 0.3536 | 0.8529 | 0.8529 | | 0.2598 | 216.67 | 5200 | 0.3503 | 0.8539 | 0.8539 | | 0.2584 | 225.0 | 5400 | 0.3609 | 0.8526 | 0.8527 | | 0.2571 | 233.33 | 5600 | 0.3541 | 0.8541 | 0.8541 | | 0.2546 | 241.67 | 5800 | 0.3672 | 0.8556 | 0.8557 | | 0.2523 | 250.0 | 6000 | 0.3612 | 0.8536 | 0.8537 | | 0.2506 | 258.33 | 6200 | 0.3567 | 0.8544 | 0.8544 | | 0.2494 | 266.67 | 6400 | 0.3671 | 0.8545 | 0.8546 | | 0.2473 | 275.0 | 6600 | 0.3579 | 0.8559 | 0.8559 | | 0.2465 | 283.33 | 6800 | 0.3654 | 0.8557 | 0.8557 | | 0.2447 | 291.67 | 7000 | 0.3649 | 0.8571 | 0.8571 | | 0.244 | 300.0 | 7200 | 0.3659 | 0.8579 | 0.8579 | | 0.2416 | 308.33 | 7400 | 0.3625 | 0.8574 | 0.8574 | | 0.2403 | 316.67 | 7600 | 0.3664 | 0.8576 | 0.8576 | | 0.2407 | 325.0 | 7800 | 0.3678 | 0.8589 | 0.8590 | | 0.2386 | 333.33 | 8000 | 0.3759 | 0.8573 | 0.8574 | | 0.2387 | 341.67 | 8200 | 0.3676 | 0.8579 | 0.8579 | | 0.236 | 350.0 | 8400 | 0.3745 | 0.8564 | 0.8564 | | 0.2378 | 358.33 | 8600 | 0.3682 | 0.8574 | 0.8574 | | 0.2364 | 
366.67 | 8800 | 0.3765 | 0.8570 | 0.8571 | | 0.2362 | 375.0 | 9000 | 0.3663 | 0.8586 | 0.8586 | | 0.2344 | 383.33 | 9200 | 0.3676 | 0.8577 | 0.8578 | | 0.2353 | 391.67 | 9400 | 0.3689 | 0.8574 | 0.8574 | | 0.2332 | 400.0 | 9600 | 0.3696 | 0.8581 | 0.8581 | | 0.2342 | 408.33 | 9800 | 0.3693 | 0.8576 | 0.8576 | | 0.2343 | 416.67 | 10000 | 0.3697 | 0.8583 | 0.8583 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_4096_512_15M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_4096_512_15M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-04-17T05:44:00+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
GUE\_prom\_prom\_300\_all-seqsight\_4096\_512\_15M-L32\_all =========================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset. It achieves the following results on the evaluation set: * Loss: 0.4056 * F1 Score: 0.8417 * Accuracy: 0.8417 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K14ac-seqsight_4096_512_15M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.6675 - F1 Score: 0.6076 - Accuracy: 0.6097 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6649 | 15.38 | 200 | 0.6376 | 0.6224 | 0.6278 | | 0.6178 | 30.77 | 400 | 0.6472 | 0.6277 | 0.6306 | | 0.6008 | 46.15 | 600 | 0.6538 | 0.6277 | 0.6293 | | 0.5876 | 61.54 | 800 | 0.6574 | 0.6243 | 0.6315 | | 0.5748 | 76.92 | 1000 | 0.6681 | 0.6219 | 0.6236 | | 0.5637 | 92.31 | 1200 | 0.6782 | 0.6144 | 0.6172 | | 0.5542 | 107.69 | 1400 | 0.6948 | 0.6220 | 0.6230 | | 0.5456 | 123.08 | 1600 | 0.6752 | 0.6126 | 0.6160 | | 0.5385 | 138.46 | 1800 | 0.7105 | 0.6133 | 0.6127 | | 0.5285 | 153.85 | 2000 | 0.7027 | 0.6209 | 0.6209 | | 0.5194 | 169.23 | 2200 | 0.7493 | 0.6179 | 0.6163 | | 0.5137 | 184.62 | 2400 | 0.7294 | 0.6146 | 0.6124 | | 0.5027 | 200.0 | 2600 | 0.7311 | 0.6163 | 0.6209 | | 0.494 | 215.38 | 2800 | 0.7639 | 0.6104 | 0.6118 | | 0.4849 | 230.77 | 3000 | 0.7514 | 0.6126 | 0.6172 | | 0.4765 | 246.15 | 3200 | 0.7655 | 0.6052 | 0.6039 | | 0.4689 | 261.54 | 3400 | 0.7741 | 0.6124 | 0.6130 | | 0.461 | 276.92 | 3600 | 0.7976 | 0.6055 | 0.6039 | | 0.4527 | 292.31 | 3800 | 0.7946 | 0.6050 | 0.6039 | | 0.446 | 307.69 | 4000 | 0.8125 | 0.6051 | 0.6030 | | 0.4417 | 323.08 | 4200 | 0.8194 | 0.6031 | 0.6012 | | 0.4323 | 338.46 | 4400 | 0.8204 | 0.6041 | 0.6030 | | 0.4271 | 353.85 | 4600 | 0.8297 | 0.6100 | 0.6091 | | 0.4218 | 369.23 | 4800 | 0.8436 | 0.6119 | 0.6103 | | 0.4149 | 384.62 | 5000 | 0.8405 | 0.6018 | 0.5994 | | 0.4086 | 400.0 | 5200 | 0.8376 | 0.6142 | 0.6136 | | 0.404 | 415.38 | 5400 | 0.8474 | 0.6100 | 0.6085 | | 0.4 | 430.77 | 5600 | 0.8542 | 0.6136 | 0.6136 | | 0.3943 | 446.15 | 5800 | 0.8769 | 0.6086 | 0.6070 | | 0.3896 | 461.54 | 6000 | 0.8502 | 0.6191 | 0.6191 | | 0.3841 | 476.92 | 6200 | 0.8681 | 0.6133 | 0.6130 | | 0.3794 | 492.31 | 6400 | 0.8401 | 0.6119 | 0.6118 | | 0.3749 | 507.69 | 6600 | 0.8540 | 0.6155 | 0.6148 | | 0.3716 | 523.08 | 6800 | 0.8895 | 0.6144 | 0.6127 | | 0.3679 | 538.46 | 7000 | 0.8743 | 0.6091 | 0.6067 | | 0.3635 | 553.85 | 7200 | 0.8875 | 0.6170 | 0.6160 | | 0.3598 | 569.23 | 7400 | 0.8864 | 0.6142 | 0.6127 | | 0.3565 | 584.62 | 7600 | 0.8842 | 0.6133 | 0.6127 | | 0.3546 | 600.0 | 7800 | 0.8893 | 0.6171 | 0.6169 | | 0.3497 | 615.38 | 8000 | 0.8842 | 0.6160 | 0.6154 | | 0.3482 | 630.77 | 8200 | 0.8820 | 0.6138 | 0.6127 | | 0.3472 | 646.15 | 8400 | 0.8850 | 0.6162 | 0.6151 | | 0.3453 | 661.54 | 8600 | 0.8858 | 0.6157 | 0.6154 | | 0.3388 | 
676.92 | 8800 | 0.9053 | 0.6171 | 0.6166 | | 0.3403 | 692.31 | 9000 | 0.8931 | 0.6134 | 0.6124 | | 0.3379 | 707.69 | 9200 | 0.8966 | 0.6175 | 0.6166 | | 0.3374 | 723.08 | 9400 | 0.8979 | 0.6136 | 0.6133 | | 0.3369 | 738.46 | 9600 | 0.8942 | 0.6170 | 0.6160 | | 0.3355 | 753.85 | 9800 | 0.8991 | 0.6153 | 0.6148 | | 0.3348 | 769.23 | 10000 | 0.9009 | 0.6154 | 0.6148 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_4096_512_15M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_4096_512_15M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-04-17T05:44:59+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
GUE\_EMP\_H3K14ac-seqsight\_4096\_512\_15M-L32\_all =================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset. It achieves the following results on the evaluation set: * Loss: 0.6675 * F1 Score: 0.6076 * Accuracy: 0.6097 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-classification
transformers
# Model Card for Model ID This model is a fine-tuned version of bert-base-cased trained on a finance-domain dataset, and it reaches an accuracy of 85%. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Abhijit. - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** English. - **License:** [More Information Needed] - **Finetuned from model [optional]:** bert-base-cased. ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses This model is trained on customer complaints that have been segregated into various departments; given a complaint as input, it predicts the corresponding department label. <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
AbhijitShejal/fin_bert_model
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T05:45:37+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID This model is a fine-tuned version of bert-base-cased trained on a finance-domain dataset, and it reaches an accuracy of 85%. ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: Abhijit. - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): English. - License: - Finetuned from model [optional]: bert-base-cased. ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses This model is trained on customer complaints that have been segregated into various departments; given a complaint as input, it predicts the corresponding department label. ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID\n\nThis model is a fine-tuned version of bert-base-cased which is trained on the finance domain dataset.\nwith an accuracy of 85%.", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Abhijit.\n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): it is trained on English language.\n- License: \n- Finetuned from model [optional]: bert-base-cased.", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses\n\nThis model is trained on customer complaints. the complaints are segregated into various departments and the model is trained in the dataset\nwhen we put the complaint model can predict the result which can show the labels.", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID\n\nThis model is a fine-tuned version of bert-base-cased which is trained on the finance domain dataset.\nwith an accuracy of 85%.", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Abhijit.\n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): it is trained on English language.\n- License: \n- Finetuned from model [optional]: bert-base-cased.", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses\n\nThis model is trained on customer complaints. the complaints are segregated into various departments and the model is trained in the dataset\nwhen we put the complaint model can predict the result which can show the labels.", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
Zangs3011/tester_123
null
[ "transformers", "safetensors", "opt", "text-generation", "unsloth", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-17T05:45:40+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #opt #text-generation #unsloth #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #opt #text-generation #unsloth #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1374 - F1: 0.8613 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2659 | 1.0 | 525 | 0.1523 | 0.8278 | | 0.1298 | 2.0 | 1050 | 0.1374 | 0.8540 | | 0.0807 | 3.0 | 1575 | 0.1374 | 0.8613 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de", "results": []}]}
cogsci13/xlm-roberta-base-finetuned-panx-de
null
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T05:47:35+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-de ================================== This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1374 * F1: 0.8613 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
GalaganKV/Mistral-7B-Instruct-v0.2-MultiTask-v5
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T05:48:42+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # facebook_wav2vec2-xls-r-300m_meet_tr_p_10_30h This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5611 - Cer: 0.1334 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 7 - eval_batch_size: 56 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.2651 | 0.21 | 500 | 5.9159 | 1.0 | | 2.8965 | 0.43 | 1000 | 2.0493 | 0.4184 | | 1.3122 | 0.64 | 1500 | 1.0627 | 0.2433 | | 1.0682 | 0.85 | 2000 | 0.8674 | 0.2083 | | 0.9281 | 1.07 | 2500 | 0.7413 | 0.1749 | | 0.8103 | 1.28 | 3000 | 0.7032 | 0.1652 | | 0.7823 | 1.5 | 3500 | 0.6806 | 0.1616 | | 0.7429 | 1.71 | 4000 | 0.6430 | 0.1547 | | 0.7375 | 1.92 | 4500 | 0.6253 | 0.1533 | | 0.6299 | 2.14 | 5000 | 0.6375 | 0.1423 | | 0.5801 | 2.35 | 5500 | 0.6086 | 0.1398 | | 0.5735 | 2.56 | 6000 | 0.5808 | 0.1394 | | 0.5448 | 2.78 | 6500 | 0.5736 | 0.1351 | | 0.5555 | 2.99 | 7000 | 0.5611 | 0.1334 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "facebook_wav2vec2-xls-r-300m_meet_tr_p_10_30h", "results": []}]}
namkyeong/facebook_wav2vec2-xls-r-300m_meet_tr_p_10_30h
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-17T05:49:21+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
facebook\_wav2vec2-xls-r-300m\_meet\_tr\_p\_10\_30h =================================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.5611 * Cer: 0.1334 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 7 * eval\_batch\_size: 56 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0 * Pytorch 1.10.0+cu113 * Datasets 1.18.3 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 7\n* eval\\_batch\\_size: 56\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 7\n* eval\\_batch\\_size: 56\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
dahye1/lg-gemma-ko-7b
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T05:52:17+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
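The quick-start section above is left empty. Purely as a hedged sketch — assuming the checkpoint recorded in this row (`ueken1219/bert-base-japanese-v3-wrime-sentiment`, tagged `bert` + `text-classification`) is an ordinary sequence-classification model — usage might look like the following; the label set and any extra Japanese-tokenizer dependencies are assumptions, not statements from the card.

```py
# Hedged sketch, not from the card: assumes a standard BERT sequence-classification head.
from transformers import pipeline

# The Japanese base tokenizer typically needs `fugashi` and `unidic-lite` installed.
classifier = pipeline(
    "text-classification",
    model="ueken1219/bert-base-japanese-v3-wrime-sentiment",
)

# Example input; the sentiment label set comes from the (undocumented) fine-tuning data.
print(classifier("この映画は本当に素晴らしかった。"))
```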
{"library_name": "transformers", "tags": []}
ueken1219/bert-base-japanese-v3-wrime-sentiment
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T05:54:06+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
# SDXL-Turbo Model Card <!-- Provide a quick summary of what the model is/does. --> ![row01](output_tile.jpg) SDXL-Turbo is a fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation. A real-time demo is available here: http://clipdrop.co/stable-diffusion-turbo Please note: For commercial use, please refer to https://stability.ai/membership. ## Model Details ### Model Description SDXL-Turbo is a distilled version of [SDXL 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), trained for real-time synthesis. SDXL-Turbo is based on a novel training method called Adversarial Diffusion Distillation (ADD) (see the [technical report](https://stability.ai/research/adversarial-diffusion-distillation)), which allows sampling large-scale foundational image diffusion models in 1 to 4 steps at high image quality. This approach uses score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal and combines this with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. - **Developed by:** Stability AI - **Funded by:** Stability AI - **Model type:** Generative text-to-image model - **Finetuned from model:** [SDXL 1.0 Base](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) ### Model Sources For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference). - **Repository:** https://github.com/Stability-AI/generative-models - **Paper:** https://stability.ai/research/adversarial-diffusion-distillation - **Demo:** http://clipdrop.co/stable-diffusion-turbo ## Evaluation ![comparison1](image_quality_one_step.png) ![comparison2](prompt_alignment_one_step.png) The charts above evaluate user preference for SDXL-Turbo over other single- and multi-step models. SDXL-Turbo evaluated at a single step is preferred by human voters in terms of image quality and prompt following over LCM-XL evaluated at four (or fewer) steps. In addition, we see that using four steps for SDXL-Turbo further improves performance. For details on the user study, we refer to the [research paper](https://stability.ai/research/adversarial-diffusion-distillation). ## Uses ### Direct Use The model is intended for both non-commercial and commercial usage. You can use this model for non-commercial or research purposes under this [license](https://huggingface.co/stabilityai/sdxl-turbo/blob/main/LICENSE.TXT). Possible research areas and tasks include - Research on generative models. - Research on real-time applications of generative models. - Research on the impact of real-time generative models. - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. For commercial use, please refer to https://stability.ai/membership. Excluded uses are described below. ### Diffusers ``` pip install diffusers transformers accelerate --upgrade ``` - **Text-to-image**: SDXL-Turbo does not make use of `guidance_scale` or `negative_prompt`, we disable it with `guidance_scale=0.0`. Preferably, the model generates images of size 512x512 but higher image sizes work as well. 
A **single step** is enough to generate high quality images. ```py from diffusers import AutoPipelineForText2Image import torch pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") pipe.to("cuda") prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe." image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0] ``` - **Image-to-image**: When using SDXL-Turbo for image-to-image generation, make sure that `num_inference_steps` * `strength` is larger or equal to 1. The image-to-image pipeline will run for `int(num_inference_steps * strength)` steps, *e.g.* 0.5 * 2.0 = 1 step in our example below. ```py from diffusers import AutoPipelineForImage2Image from diffusers.utils import load_image import torch pipe = AutoPipelineForImage2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") pipe.to("cuda") init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png").resize((512, 512)) prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" image = pipe(prompt, image=init_image, num_inference_steps=2, strength=0.5, guidance_scale=0.0).images[0] ``` ### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. The model should not be used in any way that violates Stability AI's [Acceptable Use Policy](https://stability.ai/use-policy). ## Limitations and Bias ### Limitations - The generated images are of a fixed resolution (512x512 pix), and the model does not achieve perfect photorealism. - The model cannot render legible text. - Faces and people in general may not be generated properly. - The autoencoding part of the model is lossy. ### Recommendations The model is intended for both non-commercial and commercial usage. ## How to Get Started with the Model Check out https://github.com/Stability-AI/generative-models
{"license": "other", "pipeline_tag": "text-to-image", "inference": false, "license_name": "sai-nc-community", "license_link": "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/LICENSE.TXT"}
Fabrice-TIERCELIN/sdxl-turbo
null
[ "diffusers", "onnx", "safetensors", "text-to-image", "license:other", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-04-17T05:54:09+00:00
[]
[]
TAGS #diffusers #onnx #safetensors #text-to-image #license-other #diffusers-StableDiffusionXLPipeline #region-us
# SDXL-Turbo Model Card !row01 SDXL-Turbo is a fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation. A real-time demo is available here: URL Please note: For commercial use, please refer to URL ## Model Details ### Model Description SDXL-Turbo is a distilled version of SDXL 1.0, trained for real-time synthesis. SDXL-Turbo is based on a novel training method called Adversarial Diffusion Distillation (ADD) (see the technical report), which allows sampling large-scale foundational image diffusion models in 1 to 4 steps at high image quality. This approach uses score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal and combines this with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. - Developed by: Stability AI - Funded by: Stability AI - Model type: Generative text-to-image model - Finetuned from model: SDXL 1.0 Base ### Model Sources For research purposes, we recommend our 'generative-models' Github repository (URL which implements the most popular diffusion frameworks (both training and inference). - Repository: URL - Paper: URL - Demo: URL ## Evaluation !comparison1 !comparison2 The charts above evaluate user preference for SDXL-Turbo over other single- and multi-step models. SDXL-Turbo evaluated at a single step is preferred by human voters in terms of image quality and prompt following over LCM-XL evaluated at four (or fewer) steps. In addition, we see that using four steps for SDXL-Turbo further improves performance. For details on the user study, we refer to the research paper. ## Uses ### Direct Use The model is intended for both non-commercial and commercial usage. You can use this model for non-commercial or research purposes under this license. Possible research areas and tasks include - Research on generative models. - Research on real-time applications of generative models. - Research on the impact of real-time generative models. - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. For commercial use, please refer to URL Excluded uses are described below. ### Diffusers - Text-to-image: SDXL-Turbo does not make use of 'guidance_scale' or 'negative_prompt', we disable it with 'guidance_scale=0.0'. Preferably, the model generates images of size 512x512 but higher image sizes work as well. A single step is enough to generate high quality images. - Image-to-image: When using SDXL-Turbo for image-to-image generation, make sure that 'num_inference_steps' * 'strength' is larger or equal to 1. The image-to-image pipeline will run for 'int(num_inference_steps * strength)' steps, *e.g.* 0.5 * 2.0 = 1 step in our example below. ### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. The model should not be used in any way that violates Stability AI's Acceptable Use Policy. ## Limitations and Bias ### Limitations - The generated images are of a fixed resolution (512x512 pix), and the model does not achieve perfect photorealism. - The model cannot render legible text. - Faces and people in general may not be generated properly. 
- The autoencoding part of the model is lossy. ### Recommendations The model is intended for both non-commercial and commercial usage. ## How to Get Started with the Model Check out URL
[ "# SDXL-Turbo Model Card\n\n\n!row01\nSDXL-Turbo is a fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation.\nA real-time demo is available here: URL\n\nPlease note: For commercial use, please refer to URL", "## Model Details", "### Model Description\nSDXL-Turbo is a distilled version of SDXL 1.0, trained for real-time synthesis. \nSDXL-Turbo is based on a novel training method called Adversarial Diffusion Distillation (ADD) (see the technical report), which allows sampling large-scale foundational \nimage diffusion models in 1 to 4 steps at high image quality. \nThis approach uses score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal and combines this with an\nadversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. \n\n- Developed by: Stability AI\n- Funded by: Stability AI\n- Model type: Generative text-to-image model\n- Finetuned from model: SDXL 1.0 Base", "### Model Sources\n\nFor research purposes, we recommend our 'generative-models' Github repository (URL \nwhich implements the most popular diffusion frameworks (both training and inference).\n\n- Repository: URL\n- Paper: URL\n- Demo: URL", "## Evaluation\n!comparison1\n!comparison2\nThe charts above evaluate user preference for SDXL-Turbo over other single- and multi-step models.\nSDXL-Turbo evaluated at a single step is preferred by human voters in terms of image quality and prompt following over LCM-XL evaluated at four (or fewer) steps.\nIn addition, we see that using four steps for SDXL-Turbo further improves performance.\nFor details on the user study, we refer to the research paper.", "## Uses", "### Direct Use\n\nThe model is intended for both non-commercial and commercial usage. You can use this model for non-commercial or research purposes under this license. Possible research areas and tasks include\n\n- Research on generative models.\n- Research on real-time applications of generative models.\n- Research on the impact of real-time generative models.\n- Safe deployment of models which have the potential to generate harmful content.\n- Probing and understanding the limitations and biases of generative models.\n- Generation of artworks and use in design and other artistic processes.\n- Applications in educational or creative tools.\n\nFor commercial use, please refer to URL\n\nExcluded uses are described below.", "### Diffusers\n\n\n\n- Text-to-image:\n\nSDXL-Turbo does not make use of 'guidance_scale' or 'negative_prompt', we disable it with 'guidance_scale=0.0'.\nPreferably, the model generates images of size 512x512 but higher image sizes work as well.\nA single step is enough to generate high quality images.\n\n\n\n- Image-to-image:\n\nWhen using SDXL-Turbo for image-to-image generation, make sure that 'num_inference_steps' * 'strength' is larger or equal \nto 1. 
The image-to-image pipeline will run for 'int(num_inference_steps * strength)' steps, *e.g.* 0.5 * 2.0 = 1 step in our example \nbelow.", "### Out-of-Scope Use\n\nThe model was not trained to be factual or true representations of people or events, \nand therefore using the model to generate such content is out-of-scope for the abilities of this model.\nThe model should not be used in any way that violates Stability AI's Acceptable Use Policy.", "## Limitations and Bias", "### Limitations\n- The generated images are of a fixed resolution (512x512 pix), and the model does not achieve perfect photorealism.\n- The model cannot render legible text.\n- Faces and people in general may not be generated properly.\n- The autoencoding part of the model is lossy.", "### Recommendations\n\nThe model is intended for both non-commercial and commercial usage.", "## How to Get Started with the Model\n\nCheck out URL" ]
[ "TAGS\n#diffusers #onnx #safetensors #text-to-image #license-other #diffusers-StableDiffusionXLPipeline #region-us \n", "# SDXL-Turbo Model Card\n\n\n!row01\nSDXL-Turbo is a fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation.\nA real-time demo is available here: URL\n\nPlease note: For commercial use, please refer to URL", "## Model Details", "### Model Description\nSDXL-Turbo is a distilled version of SDXL 1.0, trained for real-time synthesis. \nSDXL-Turbo is based on a novel training method called Adversarial Diffusion Distillation (ADD) (see the technical report), which allows sampling large-scale foundational \nimage diffusion models in 1 to 4 steps at high image quality. \nThis approach uses score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal and combines this with an\nadversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. \n\n- Developed by: Stability AI\n- Funded by: Stability AI\n- Model type: Generative text-to-image model\n- Finetuned from model: SDXL 1.0 Base", "### Model Sources\n\nFor research purposes, we recommend our 'generative-models' Github repository (URL \nwhich implements the most popular diffusion frameworks (both training and inference).\n\n- Repository: URL\n- Paper: URL\n- Demo: URL", "## Evaluation\n!comparison1\n!comparison2\nThe charts above evaluate user preference for SDXL-Turbo over other single- and multi-step models.\nSDXL-Turbo evaluated at a single step is preferred by human voters in terms of image quality and prompt following over LCM-XL evaluated at four (or fewer) steps.\nIn addition, we see that using four steps for SDXL-Turbo further improves performance.\nFor details on the user study, we refer to the research paper.", "## Uses", "### Direct Use\n\nThe model is intended for both non-commercial and commercial usage. You can use this model for non-commercial or research purposes under this license. Possible research areas and tasks include\n\n- Research on generative models.\n- Research on real-time applications of generative models.\n- Research on the impact of real-time generative models.\n- Safe deployment of models which have the potential to generate harmful content.\n- Probing and understanding the limitations and biases of generative models.\n- Generation of artworks and use in design and other artistic processes.\n- Applications in educational or creative tools.\n\nFor commercial use, please refer to URL\n\nExcluded uses are described below.", "### Diffusers\n\n\n\n- Text-to-image:\n\nSDXL-Turbo does not make use of 'guidance_scale' or 'negative_prompt', we disable it with 'guidance_scale=0.0'.\nPreferably, the model generates images of size 512x512 but higher image sizes work as well.\nA single step is enough to generate high quality images.\n\n\n\n- Image-to-image:\n\nWhen using SDXL-Turbo for image-to-image generation, make sure that 'num_inference_steps' * 'strength' is larger or equal \nto 1. 
The image-to-image pipeline will run for 'int(num_inference_steps * strength)' steps, *e.g.* 0.5 * 2.0 = 1 step in our example \nbelow.", "### Out-of-Scope Use\n\nThe model was not trained to be factual or true representations of people or events, \nand therefore using the model to generate such content is out-of-scope for the abilities of this model.\nThe model should not be used in any way that violates Stability AI's Acceptable Use Policy.", "## Limitations and Bias", "### Limitations\n- The generated images are of a fixed resolution (512x512 pix), and the model does not achieve perfect photorealism.\n- The model cannot render legible text.\n- Faces and people in general may not be generated properly.\n- The autoencoding part of the model is lossy.", "### Recommendations\n\nThe model is intended for both non-commercial and commercial usage.", "## How to Get Started with the Model\n\nCheck out URL" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me2-seqsight_4096_512_15M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset. It achieves the following results on the evaluation set: - Loss: 0.6456 - F1 Score: 0.6194 - Accuracy: 0.6233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6554 | 16.67 | 200 | 0.6497 | 0.6182 | 0.6246 | | 0.618 | 33.33 | 400 | 0.6670 | 0.5884 | 0.6051 | | 0.6002 | 50.0 | 600 | 0.6907 | 0.6014 | 0.6067 | | 0.586 | 66.67 | 800 | 0.6848 | 0.5822 | 0.6093 | | 0.5716 | 83.33 | 1000 | 0.7291 | 0.5926 | 0.5963 | | 0.5601 | 100.0 | 1200 | 0.6969 | 0.5926 | 0.5979 | | 0.5521 | 116.67 | 1400 | 0.7144 | 0.5909 | 0.6044 | | 0.5423 | 133.33 | 1600 | 0.7276 | 0.5811 | 0.6002 | | 0.5353 | 150.0 | 1800 | 0.7250 | 0.5956 | 0.6018 | | 0.5289 | 166.67 | 2000 | 0.7242 | 0.5864 | 0.5995 | | 0.5201 | 183.33 | 2200 | 0.7578 | 0.5972 | 0.6022 | | 0.5142 | 200.0 | 2400 | 0.7564 | 0.5962 | 0.6061 | | 0.5046 | 216.67 | 2600 | 0.7434 | 0.5919 | 0.5986 | | 0.4938 | 233.33 | 2800 | 0.7612 | 0.5877 | 0.5986 | | 0.4846 | 250.0 | 3000 | 0.7683 | 0.5865 | 0.5973 | | 0.476 | 266.67 | 3200 | 0.8137 | 0.5938 | 0.6012 | | 0.4666 | 283.33 | 3400 | 0.8254 | 0.5944 | 0.6015 | | 0.4552 | 300.0 | 3600 | 0.8067 | 0.5874 | 0.5888 | | 0.4485 | 316.67 | 3800 | 0.8205 | 0.5932 | 0.5969 | | 0.4413 | 333.33 | 4000 | 0.7966 | 0.5864 | 0.5924 | | 0.4334 | 350.0 | 4200 | 0.8377 | 0.5903 | 0.6025 | | 0.4245 | 366.67 | 4400 | 0.8366 | 0.5909 | 0.5930 | | 0.4181 | 383.33 | 4600 | 0.8468 | 0.5925 | 0.5960 | | 0.4113 | 400.0 | 4800 | 0.8622 | 0.5955 | 0.5989 | | 0.4044 | 416.67 | 5000 | 0.8693 | 0.5941 | 0.5973 | | 0.3992 | 433.33 | 5200 | 0.8761 | 0.5959 | 0.6025 | | 0.3934 | 450.0 | 5400 | 0.8944 | 0.5934 | 0.5986 | | 0.3877 | 466.67 | 5600 | 0.8938 | 0.5923 | 0.5963 | | 0.3823 | 483.33 | 5800 | 0.9055 | 0.5950 | 0.5973 | | 0.3769 | 500.0 | 6000 | 0.9218 | 0.5927 | 0.5940 | | 0.3709 | 516.67 | 6200 | 0.9273 | 0.5893 | 0.5911 | | 0.3669 | 533.33 | 6400 | 0.9430 | 0.5961 | 0.6002 | | 0.3622 | 550.0 | 6600 | 0.9245 | 0.5910 | 0.5927 | | 0.3559 | 566.67 | 6800 | 0.9351 | 0.5864 | 0.5885 | | 0.353 | 583.33 | 7000 | 0.9301 | 0.5931 | 0.5969 | | 0.3472 | 600.0 | 7200 | 0.9371 | 0.5969 | 0.6018 | | 0.345 | 616.67 | 7400 | 0.9465 | 0.5857 | 0.5865 | | 0.3411 | 633.33 | 7600 | 0.9423 | 0.5871 | 0.5898 | | 0.3376 | 650.0 | 7800 | 0.9643 | 0.5970 | 0.6018 | | 0.3345 | 666.67 | 8000 | 0.9649 | 0.5971 | 0.6018 | | 0.3324 | 683.33 | 8200 | 0.9729 | 0.5966 | 0.6031 | | 0.3292 | 700.0 | 8400 | 0.9684 | 0.5948 | 0.5992 | | 0.3271 | 716.67 | 8600 | 0.9733 | 0.5980 | 0.6028 | | 0.325 | 733.33 | 8800 
| 0.9709 | 0.5945 | 0.5963 | | 0.3229 | 750.0 | 9000 | 0.9800 | 0.5926 | 0.5956 | | 0.321 | 766.67 | 9200 | 0.9863 | 0.5949 | 0.5992 | | 0.319 | 783.33 | 9400 | 0.9785 | 0.5926 | 0.5956 | | 0.3185 | 800.0 | 9600 | 0.9813 | 0.5935 | 0.5966 | | 0.3188 | 816.67 | 9800 | 0.9793 | 0.5954 | 0.5992 | | 0.3172 | 833.33 | 10000 | 0.9851 | 0.5952 | 0.5989 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
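For readers who want to reproduce the setup, the hyperparameter list above maps one-to-one onto a 🤗 `TrainingArguments` configuration. The sketch below only restates the values given in the card; the output directory name is illustrative, and loading of the PEFT adapter, base model, and dataset is omitted.

```py
# Hedged sketch: restates the hyperparameters listed above; everything not shown keeps
# its default value (Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K4me2-seqsight_4096_512_15M-L32_all",  # illustrative name
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```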
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_4096_512_15M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_4096_512_15M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-04-17T05:55:05+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
GUE\_EMP\_H3K4me2-seqsight\_4096\_512\_15M-L32\_all =================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset. It achieves the following results on the evaluation set: * Loss: 0.6456 * F1 Score: 0.6194 * Accuracy: 0.6233 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # long-t5-local-base-finetuned-justification-v11 This model is a fine-tuned version of [google/long-t5-local-base](https://huggingface.co/google/long-t5-local-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 10.3775 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-07 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 10.3775 | 1.0 | 676 | 10.3775 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.2.2+cu121 - Datasets 2.16.0 - Tokenizers 0.15.2
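No usage example is given in the card. As a hedged sketch only — assuming the checkpoint (`satyanshu404/long-t5-local-base-finetuned-justification-v11`, tagged `longt5` + `text2text-generation`) loads like any seq2seq model, and noting that the expected prompt format is not documented — inference might look like:

```py
# Hedged sketch, not from the card: standard text2text-generation usage.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="satyanshu404/long-t5-local-base-finetuned-justification-v11",
)

# Plain text is used for illustration; the real input format is undocumented.
print(generator("Explain the justification for the following decision: ...",
                max_new_tokens=128))
```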
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google/long-t5-local-base", "model-index": [{"name": "long-t5-local-base-finetuned-justification-v11", "results": []}]}
satyanshu404/long-t5-local-base-finetuned-justification-v11
null
[ "transformers", "safetensors", "longt5", "text2text-generation", "generated_from_trainer", "base_model:google/long-t5-local-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T05:56:27+00:00
[]
[]
TAGS #transformers #safetensors #longt5 #text2text-generation #generated_from_trainer #base_model-google/long-t5-local-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
long-t5-local-base-finetuned-justification-v11 ============================================== This model is a fine-tuned version of google/long-t5-local-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 10.3775 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-07 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.2.2+cu121 * Datasets 2.16.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-07\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.16.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #longt5 #text2text-generation #generated_from_trainer #base_model-google/long-t5-local-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-07\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.16.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_usp4_dpo5 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0392 - Rewards/chosen: -5.3723 - Rewards/rejected: -8.4336 - Rewards/accuracies: 0.6400 - Rewards/margins: 3.0613 - Logps/rejected: -127.9703 - Logps/chosen: -121.9222 - Logits/rejected: -0.8515 - Logits/chosen: -0.7928 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.0857 | 2.67 | 100 | 1.4158 | -6.1266 | -7.7828 | 0.5700 | 1.6562 | -126.6688 | -123.4309 | -0.2615 | -0.2304 | | 0.0373 | 5.33 | 200 | 2.0473 | -10.8748 | -14.0013 | 0.6400 | 3.1265 | -139.1058 | -132.9272 | -1.1231 | -1.1327 | | 0.0061 | 8.0 | 300 | 2.3674 | -11.1453 | -14.1832 | 0.5900 | 3.0378 | -139.4695 | -133.4684 | -0.8038 | -0.7431 | | 0.0004 | 10.67 | 400 | 2.0235 | -4.6284 | -7.5396 | 0.6500 | 2.9112 | -126.1823 | -120.4344 | -0.8446 | -0.7851 | | 0.0 | 13.33 | 500 | 2.0425 | -5.3605 | -8.3967 | 0.6400 | 3.0362 | -127.8966 | -121.8987 | -0.8512 | -0.7922 | | 0.0 | 16.0 | 600 | 2.0426 | -5.3772 | -8.4171 | 0.6400 | 3.0399 | -127.9373 | -121.9320 | -0.8517 | -0.7927 | | 0.0 | 18.67 | 700 | 2.0478 | -5.3866 | -8.4190 | 0.6400 | 3.0323 | -127.9411 | -121.9509 | -0.8520 | -0.7932 | | 0.0 | 21.33 | 800 | 2.0499 | -5.3884 | -8.4250 | 0.6400 | 3.0366 | -127.9531 | -121.9544 | -0.8517 | -0.7929 | | 0.0 | 24.0 | 900 | 2.0375 | -5.3727 | -8.4358 | 0.6400 | 3.0631 | -127.9748 | -121.9230 | -0.8519 | -0.7930 | | 0.0 | 26.67 | 1000 | 2.0392 | -5.3723 | -8.4336 | 0.6400 | 3.0613 | -127.9703 | -121.9222 | -0.8515 | -0.7928 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
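The card documents training only. A hedged sketch of loading the resulting PEFT adapter for inference — the adapter id and the gated base model come from this record, while the rest is generic PEFT usage rather than something stated in the card — might look like:

```py
# Hedged sketch: standard PEFT adapter loading; the base model is gated, so access
# to meta-llama/Llama-2-7b-chat-hf must be granted first.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "guoyu-zhang/model_usp4_dpo5")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```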
{"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_usp4_dpo5", "results": []}]}
guoyu-zhang/model_usp4_dpo5
null
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-04-17T05:57:39+00:00
[]
[]
TAGS #peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
model\_usp4\_dpo5 ================= This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.0392 * Rewards/chosen: -5.3723 * Rewards/rejected: -8.4336 * Rewards/accuracies: 0.6400 * Rewards/margins: 3.0613 * Logps/rejected: -127.9703 * Logps/chosen: -121.9222 * Logits/rejected: -0.8515 * Logits/chosen: -0.7928 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 4 * eval\_batch\_size: 1 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 100 * training\_steps: 1000 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/LlamaAdapter-llama2-happy-300-prompt-system0.03
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T05:58:21+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
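The quick-start section above is empty. Assuming only what this record's tags state (`llama`, `text-generation`), a hedged sketch of basic usage could be:

```py
# Hedged sketch, not from the card: treats the checkpoint as an ordinary causal LM.
# device_map="auto" requires the `accelerate` package.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="OwOOwO/dumbo-krillin33",
    torch_dtype=torch.float16,
    device_map="auto",
)
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```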
{"library_name": "transformers", "tags": []}
OwOOwO/dumbo-krillin33
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T05:59:17+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K9ac-seqsight_4096_512_15M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.7761 - F1 Score: 0.6415 - Accuracy: 0.6405 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 1536 - eval_batch_size: 1536 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6506 | 13.33 | 200 | 0.6351 | 0.6278 | 0.6272 | | 0.5996 | 26.67 | 400 | 0.6395 | 0.6254 | 0.6247 | | 0.5815 | 40.0 | 600 | 0.6488 | 0.6176 | 0.6171 | | 0.5671 | 53.33 | 800 | 0.6616 | 0.6174 | 0.6171 | | 0.5516 | 66.67 | 1000 | 0.6911 | 0.6013 | 0.6107 | | 0.5381 | 80.0 | 1200 | 0.6763 | 0.6220 | 0.6214 | | 0.5249 | 93.33 | 1400 | 0.6934 | 0.6186 | 0.6182 | | 0.5126 | 106.67 | 1600 | 0.7236 | 0.6188 | 0.6182 | | 0.505 | 120.0 | 1800 | 0.7130 | 0.6164 | 0.6160 | | 0.4966 | 133.33 | 2000 | 0.7272 | 0.6077 | 0.6081 | | 0.4882 | 146.67 | 2200 | 0.7201 | 0.6177 | 0.6171 | | 0.4789 | 160.0 | 2400 | 0.7228 | 0.6194 | 0.6196 | | 0.4712 | 173.33 | 2600 | 0.7515 | 0.6248 | 0.6240 | | 0.4624 | 186.67 | 2800 | 0.7669 | 0.6211 | 0.6204 | | 0.4548 | 200.0 | 3000 | 0.7630 | 0.6265 | 0.6258 | | 0.4455 | 213.33 | 3200 | 0.7951 | 0.6243 | 0.6236 | | 0.4369 | 226.67 | 3400 | 0.7922 | 0.6204 | 0.6196 | | 0.4299 | 240.0 | 3600 | 0.8258 | 0.6301 | 0.6294 | | 0.4205 | 253.33 | 3800 | 0.8276 | 0.6190 | 0.6182 | | 0.4137 | 266.67 | 4000 | 0.8456 | 0.6190 | 0.6182 | | 0.407 | 280.0 | 4200 | 0.8551 | 0.6118 | 0.6110 | | 0.4013 | 293.33 | 4400 | 0.8433 | 0.6137 | 0.6132 | | 0.3954 | 306.67 | 4600 | 0.8712 | 0.6178 | 0.6171 | | 0.3877 | 320.0 | 4800 | 0.8720 | 0.6175 | 0.6171 | | 0.3834 | 333.33 | 5000 | 0.8709 | 0.6218 | 0.6211 | | 0.3788 | 346.67 | 5200 | 0.8778 | 0.6196 | 0.6189 | | 0.372 | 360.0 | 5400 | 0.9094 | 0.6166 | 0.6160 | | 0.368 | 373.33 | 5600 | 0.8775 | 0.6182 | 0.6175 | | 0.3636 | 386.67 | 5800 | 0.8763 | 0.6093 | 0.6085 | | 0.3586 | 400.0 | 6000 | 0.8784 | 0.6136 | 0.6128 | | 0.3544 | 413.33 | 6200 | 0.8748 | 0.6207 | 0.6200 | | 0.3511 | 426.67 | 6400 | 0.8912 | 0.6160 | 0.6153 | | 0.3454 | 440.0 | 6600 | 0.8958 | 0.6129 | 0.6121 | | 0.3414 | 453.33 | 6800 | 0.9208 | 0.6204 | 0.6196 | | 0.3409 | 466.67 | 7000 | 0.8944 | 0.6134 | 0.6128 | | 0.3366 | 480.0 | 7200 | 0.9156 | 0.6118 | 0.6110 | | 0.3336 | 493.33 | 7400 | 0.9184 | 0.6128 | 0.6121 | | 0.3305 | 506.67 | 7600 | 0.9155 | 0.6093 | 0.6085 | | 0.3287 | 520.0 | 7800 | 0.9289 | 0.6165 | 0.6157 | | 0.3256 | 533.33 | 8000 | 0.9468 | 0.6114 | 0.6114 | | 0.3241 | 546.67 | 8200 | 0.9266 | 0.6118 | 0.6110 | | 0.3219 | 560.0 | 8400 | 0.9336 | 0.6111 | 0.6103 | | 0.319 | 573.33 | 8600 | 0.9351 | 0.6107 | 0.6099 | | 0.3182 | 586.67 | 8800 | 
0.9294 | 0.6132 | 0.6125 | | 0.3168 | 600.0 | 9000 | 0.9435 | 0.6122 | 0.6114 | | 0.3148 | 613.33 | 9200 | 0.9387 | 0.6089 | 0.6081 | | 0.3138 | 626.67 | 9400 | 0.9408 | 0.6068 | 0.6060 | | 0.3126 | 640.0 | 9600 | 0.9489 | 0.6090 | 0.6081 | | 0.3106 | 653.33 | 9800 | 0.9471 | 0.6061 | 0.6053 | | 0.311 | 666.67 | 10000 | 0.9436 | 0.6079 | 0.6071 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
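As a reading aid for the hyperparameter list above, the sketch below expresses the reported settings with the 🤗 Transformers `TrainingArguments` API. It is an illustration of the listed values only, not the authors' actual training script; the output directory name is hypothetical.

```python
# Illustrative only: the reported hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K9ac-seqsight_4096_512_15M-L32_all",  # hypothetical output path
    learning_rate=5e-4,                # learning_rate: 0.0005
    per_device_train_batch_size=1536,  # train_batch_size: 1536
    per_device_eval_batch_size=1536,   # eval_batch_size: 1536
    seed=42,
    adam_beta1=0.9,                    # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # epsilon: 1e-08
    lr_scheduler_type="linear",
    max_steps=10_000,                  # training_steps: 10000
)
```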
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_4096_512_15M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_4096_512_15M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-04-17T06:00:15+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
GUE\_EMP\_H3K9ac-seqsight\_4096\_512\_15M-L32\_all ================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset. It achieves the following results on the evaluation set: * Loss: 0.7761 * F1 Score: 0.6415 * Accuracy: 0.6405 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 1536 * eval\_batch\_size: 1536 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
automatic-speech-recognition
transformers
# Whisper-lithuanian

Whisper-lithuanian is a finetuned model for automatic speech recognition (ASR). It was trained on 2 hours of audio filtered to Lithuanian-only speech from the mozilla-foundation/common_voice_13_0 dataset.

# Training log

[9309/9309 1:53:12, Epoch 3/3]

|Epoch |Training Loss |Validation Loss |
|----------|-----------------|-----------------|
|1 |0.030600 |0.034302 |
|2 |0.013200 |0.030458 |
|3 |0.004100 |0.029847 |

# Whisper model list

| Size     | Parameters | English-only                                          | Multilingual                                        |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny     | 39 M       | [✓](https://huggingface.co/openai/whisper-tiny.en)   | [✓](https://huggingface.co/openai/whisper-tiny)     |
| base     | 74 M       | [✓](https://huggingface.co/openai/whisper-base.en)   | [✓](https://huggingface.co/openai/whisper-base)     |
| small    | 244 M      | [✓](https://huggingface.co/openai/whisper-small.en)  | [✓](https://huggingface.co/openai/whisper-small)    |
| medium   | 769 M      | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium)   |
| large    | 1550 M     | x                                                    | [✓](https://huggingface.co/openai/whisper-large)    |
| large-v2 | 1550 M     | x                                                    | [✓](https://huggingface.co/openai/whisper-large-v2) |
| large-v3 | 1550 M     | x                                                    | [✓](https://huggingface.co/openai/whisper-large-v3) |

The original code repository can be found [here](https://github.com/openai/whisper).

## Usage

```python
from transformers import pipeline

model_name = "Aismantas/whisper-base-lithuanian"

# Build an ASR pipeline backed by the fine-tuned checkpoint
asr_pipeline = pipeline("automatic-speech-recognition", model=model_name)

# Path to the audio file to transcribe
audio_file = "example_1.wav"

# Run the transcription
transcription = asr_pipeline(audio_file)
print(transcription)
```

## Fine-Tuning

The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However, its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.

### Evaluated Use

The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.

The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.

In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes.
The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes. ## Training Data The models are trained on 1 million hours of weakly labeled audio and 4 million hours of pseudolabeled audio collected using Whisper `large-v2`. As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language. ## Performance and Limitations Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level. However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself. Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf). In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages. ## Broader Implications We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications. There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. 
In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
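Beyond the pipeline call shown in the Usage section of this card, the sketch below illustrates two common Whisper options: pinning the decoding language to Lithuanian and chunking recordings longer than 30 seconds. The `generate_kwargs` and `chunk_length_s` settings are standard 🤗 Transformers pipeline options rather than something this card documents, so treat the snippet as a hedged example.

```python
# Sketch only: force Lithuanian transcription and enable chunked long-form decoding.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Aismantas/whisper-base-lithuanian",
    chunk_length_s=30,  # split long recordings into 30-second chunks
)

result = asr(
    "example_1.wav",  # same example file name as in the Usage section
    generate_kwargs={"language": "lithuanian", "task": "transcribe"},
)
print(result["text"])
```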
{"language": ["lt"], "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition"], "datasets": ["mozilla-foundation/common_voice_13_0"], "pipeline_tag": "automatic-speech-recognition"}
Aismantas/whisper-base-lithuanian
null
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "audio", "lt", "dataset:mozilla-foundation/common_voice_13_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-17T06:00:27+00:00
[]
[ "lt" ]
TAGS #transformers #safetensors #whisper #automatic-speech-recognition #audio #lt #dataset-mozilla-foundation/common_voice_13_0 #license-apache-2.0 #endpoints_compatible #region-us
Whisper-lithuanian ================== Whisper-lithuanian is a finetuned model for automatic speech recognition (ASR). Trained on 2 hours with filtered only lithuanian language mozilla-foundation/common\_voice\_13\_0 dataset. trining log =========== [9309/9309 1:53:12, Epoch 3/3] Epoch: 1, Training Loss: 0.030600, Validation Loss: 0.034302 Epoch: 2, Training Loss: 0.013200, Validation Loss: 0.030458 Epoch: 3, Training Loss: 0.004100, Validation Loss: 0.029847 Whisper model list ================== The original code repository can be found here. Usage ----- Fine-Tuning ----------- The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However, its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog post Fine-Tune Whisper with Transformers provides a step-by-step guide to fine-tuning the Whisper model with as little as 5 hours of labelled data. ### Evaluated Use The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research. The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them. In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes. Training Data ------------- The models are trained on 1 million hours of weakly labeled audio and 4 million hours of pseudolabeled audio collected using Whisper 'large-v2'. As discussed in the accompanying paper, we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language. Performance and Limitations --------------------------- Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level. However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). 
We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself. Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in the paper accompanying this release. In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in the paper. It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages. Broader Implications -------------------- We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications. There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
[ "### Evaluated Use\n\n\nThe primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.\n\n\nThe models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.\n\n\nIn particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.\n\n\nTraining Data\n-------------\n\n\nThe models are trained on 1 million hours of weakly labeled audio and 4 million hours of pseudolabeled audio collected using Whisper 'large-v2'.\n\n\nAs discussed in the accompanying paper, we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.\n\n\nPerformance and Limitations\n---------------------------\n\n\nOur studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.\n\n\nHowever, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.\n\n\nOur models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in the paper accompanying this release.\n\n\nIn addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in the paper. 
It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.\n\n\nBroader Implications\n--------------------\n\n\nWe anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.\n\n\nThere are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects." ]
[ "TAGS\n#transformers #safetensors #whisper #automatic-speech-recognition #audio #lt #dataset-mozilla-foundation/common_voice_13_0 #license-apache-2.0 #endpoints_compatible #region-us \n", "### Evaluated Use\n\n\nThe primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.\n\n\nThe models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.\n\n\nIn particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.\n\n\nTraining Data\n-------------\n\n\nThe models are trained on 1 million hours of weakly labeled audio and 4 million hours of pseudolabeled audio collected using Whisper 'large-v2'.\n\n\nAs discussed in the accompanying paper, we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.\n\n\nPerformance and Limitations\n---------------------------\n\n\nOur studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.\n\n\nHowever, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.\n\n\nOur models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. 
Our full evaluation results are presented in the paper accompanying this release.\n\n\nIn addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in the paper. It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.\n\n\nBroader Implications\n--------------------\n\n\nWe anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.\n\n\nThere are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects." ]
text-generation
transformers
# microsoft/rho-math-1b-interpreter-v0.1 AWQ

- Model creator: [microsoft](https://huggingface.co/microsoft)
- Original model: [rho-math-1b-interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1)

## Model summary

Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that are aligned with the desired distribution.
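A short, non-authoritative inference sketch for this AWQ-quantized checkpoint is given below. It assumes a recent 🤗 Transformers with AWQ support (the `autoawq` package installed), `accelerate` for `device_map="auto"`, and a CUDA device; the math prompt is invented, and the repository id is the one this card is published under.

```python
# Sketch only: run the AWQ-quantized Rho-1 math model with transformers.
# Assumes `autoawq` and `accelerate` are installed so the AWQ weights can be loaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/rho-math-1b-interpreter-v0.1-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Question: What is 17 * 23?\nAnswer:"  # invented example prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```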
{"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["mistral", "4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "chatml", "nlp", "math"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
solidrust/rho-math-1b-interpreter-v0.1-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "mistral", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "chatml", "nlp", "math", "en", "license:mit", "text-generation-inference", "region:us" ]
null
2024-04-17T06:01:13+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #mistral #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #nlp #math #en #license-mit #text-generation-inference #region-us
# microsoft/rho-math-1b-interpreter-v0.1 AWQ - Model creator: microsoft - Original model: rho-math-1b-interpreter-v0.1 ## Model summary Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that are aligned with the desired distribution.
[ "# microsoft/rho-math-7b-v0.1 AWQ\n\n- Model creator: microsoft\n- Original model: rho-math-1b-interpreter-v0.1", "## Model summary\n\nRho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that aligned with the desired distribution." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mistral #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #nlp #math #en #license-mit #text-generation-inference #region-us \n", "# microsoft/rho-math-7b-v0.1 AWQ\n\n- Model creator: microsoft\n- Original model: rho-math-1b-interpreter-v0.1", "## Model summary\n\nRho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that aligned with the desired distribution." ]
text-generation
transformers
# 🔎 Taiwan-inquiry_7B_v_2.1

<!-- Provide a quick summary of what the model is/does. -->

The model was fine-tuned from the **Breeze-7B-Instruct-v1_0** model using a dataset that includes 614 authentic dialogues from the National Cheng Kung University Hospital. Additionally, 336 synthetic dialogues were included in the training set, carefully crafted to encompass themes drawn from OSCE (臨床技能測驗) sample questions in Taiwan. These synthetic dialogues were generated using GPT-3.5, Gemini-Pro and Breexe-8x7B-Instruct-v0_1. The training process was geared towards simulating verbal exchanges between doctors and patients within a hospital environment.

****************************
**Updates**
****************************
* 2024/04/25 🎉 Released [Taiwan-inquiry_7B_v2.1-awq](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1-awq)
* 2024/04/29 🎉 Released [Taiwan-inquiry_7B_v2.1.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf)

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [Joseph (Chen-Wei) Li](https://www.linkedin.com/in/joseph-li-3a453b231/), research assistant at National Taiwan University Hospital.
- **Model type:** A 7B parameter GPT-like model fine-tuned on a combination of private and synthetic dialogue datasets.
- **Language(s) (NLP):** Traditional Chinese (zh-tw)
- **Finetuned from model:** [Breeze-7B-Instruct-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0)

### Usage of the model

- The user can take on the role of a doctor, and the model can engage in conversation with you as if it were a patient.
- You can provide the model with a brief patient background in the system prompt, and the model will respond based on that prompt. **(using my patient generator: [**colab**](https://colab.research.google.com/drive/17MSob_tQ2hPtMBL0xOF2zzV6WWe4dEG6?usp=sharing))**
- You can also ask directly about the symptoms of a specific disease and its possible therapies. **(Warning: this is not medical advice!)**

### Model evaluation

The model's **TMMLU+** (0-shot) performance was measured with [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) using default settings.

|Details on TMMLU+ (0 shot):<br/>Model | Base Model | STEM | Social Science | Humanities | Other | AVG |
|-----------------------------------------------------|:---------------------:|:---------------:|:--------------:|:----------:|:----------:|:-------:|
| Taiwan-inquiry_7B_v2.1 |Breeze-7B-Instruct-v1_0| 36.06 | 44.61 | 37.49 | 39.61 | 40.29 |
| Taiwan-inquiry_7B_v2.0 |Breeze-7B-Instruct-v0_1| 36.17 | 43.59 | 35.45 | 37.63 | 38.95 |
| Taiwan-inquiry_7B_v1.0 |Taiwan-LLM-7B-v2.1-chat| 26.74 | 29.47 | 26.83 | 29.61 | 28.83 |

### DEMO

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c07d1b2357c1bded7a92fa/hZpp2SGb0iBYdLJvFvDqZ.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c07d1b2357c1bded7a92fa/KIKYt8xfLOz9bhiUYzS2Q.png)
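To make the "Usage of the model" bullets above concrete, here is a hedged sketch of a single doctor-patient turn driven by a system prompt containing a patient background. The patient background and the doctor's question are invented examples, the chat template is assumed to be inherited from the Breeze-7B-Instruct base model, and `accelerate` is assumed to be installed for `device_map="auto"`; the card itself does not prescribe an exact prompt format.

```python
# Sketch only: role-play a patient given a brief background in the system prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChenWeiLi/Taiwan-inquiry_7B_v2.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    # Invented patient background; in practice it could come from the patient generator notebook.
    {"role": "system", "content": "你是一位65歲男性病人,主訴胸悶三天,有高血壓病史。"},
    # The user plays the doctor.
    {"role": "user", "content": "您好,請問您今天哪裡不舒服?"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```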
{"language": ["zh"], "license": "apache-2.0", "library_name": "transformers", "tags": ["Doctor_consultation", "Taiwan", "fine-tuning", "medicine"], "pipeline_tag": "text-generation"}
ChenWeiLi/Taiwan-inquiry_7B_v2.1
null
[ "transformers", "safetensors", "mistral", "text-generation", "Doctor_consultation", "Taiwan", "fine-tuning", "medicine", "conversational", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T06:02:25+00:00
[]
[ "zh" ]
TAGS #transformers #safetensors #mistral #text-generation #Doctor_consultation #Taiwan #fine-tuning #medicine #conversational #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Taiwan-inquiry\_7B\_v\_2.1 ========================== "The model was fine-tuned based on the Breeze-7B-Instruct-v1\_0 model using a dataset that includes 614 authentic dialogues from the National Cheng Kung University Hospital. Additionally, 336 synthetic dialogues were included in the training set, carefully crafted to encompass themes drawn from sample questions of the OSCE (臨床技能測驗) sample questions in Taiwan. These synthetic dialogues were generated using GPT-3.5, Geminio-Pro and Breexe-8x7B-Instruct-v0\_1. The training process was geared towards simulating verbal exchanges between doctors and patients within a hospital environment." Updates * 2024/04/25 Released Taiwan-inquiry\_7B\_v2.1-awq * 2024/04/29 Released Taiwan-inquiry\_7B\_v2.1.gguf ### Model Description * Developed by: Joseph (Chen-Wei) Li, researcher assistant from National Taiwan University Hospital. * Model type: A 7B parameter GPT-like model fine-tuned on a combination of private and synthetic dialogue datasets. * Language(s) (NLP): Traditional Chinese (zh-tw) * Finetuned from model : Breeze-7B-Instruct-v1\_0 ### Usage of the model * The user can take on the role of a doctor, and the model can engage in conversation with you as if it were a patient. * You can provide the model with a brief patient background in the system prompt, and the model will respond based on that prompt. (using my patient generator: colab) * Directly asking the certain disease about the symptoms and the possible therapies.(Warning: It's not medical advice!) ### Model evaluation The model got the TMMLU+ (0 shot) performance using EleutherAI/lm-evaluation-harness with default settings. ### DEMO !image/png !image/png
[ "### Model Description\n\n\n* Developed by: Joseph (Chen-Wei) Li, researcher assistant from National Taiwan University Hospital.\n* Model type: A 7B parameter GPT-like model fine-tuned on a combination of private and synthetic dialogue datasets.\n* Language(s) (NLP): Traditional Chinese (zh-tw)\n* Finetuned from model : Breeze-7B-Instruct-v1\\_0", "### Usage of the model\n\n\n* The user can take on the role of a doctor, and the model can engage in conversation with you as if it were a patient.\n* You can provide the model with a brief patient background in the system prompt, and the model will respond based on that prompt. (using my patient generator: colab)\n* Directly asking the certain disease about the symptoms and the possible therapies.(Warning: It's not medical advice!)", "### Model evaluation\n\n\nThe model got the TMMLU+ (0 shot) performance using EleutherAI/lm-evaluation-harness with default settings.", "### DEMO\n\n\n!image/png\n!image/png" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #Doctor_consultation #Taiwan #fine-tuning #medicine #conversational #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Model Description\n\n\n* Developed by: Joseph (Chen-Wei) Li, researcher assistant from National Taiwan University Hospital.\n* Model type: A 7B parameter GPT-like model fine-tuned on a combination of private and synthetic dialogue datasets.\n* Language(s) (NLP): Traditional Chinese (zh-tw)\n* Finetuned from model : Breeze-7B-Instruct-v1\\_0", "### Usage of the model\n\n\n* The user can take on the role of a doctor, and the model can engage in conversation with you as if it were a patient.\n* You can provide the model with a brief patient background in the system prompt, and the model will respond based on that prompt. (using my patient generator: colab)\n* Directly asking the certain disease about the symptoms and the possible therapies.(Warning: It's not medical advice!)", "### Model evaluation\n\n\nThe model got the TMMLU+ (0 shot) performance using EleutherAI/lm-evaluation-harness with default settings.", "### DEMO\n\n\n!image/png\n!image/png" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Prahas10/roof-sample This model is a fine-tuned version of [google/vit-base-patch16-384](https://huggingface.co/google/vit-base-patch16-384) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0745 - Validation Loss: 0.3751 - Train Accuracy: 0.9036 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1930, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.0001} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.4213 | 2.1960 | 0.3976 | 0 | | 1.4980 | 1.6884 | 0.5542 | 1 | | 0.8368 | 1.2810 | 0.6386 | 2 | | 0.4281 | 1.0031 | 0.7108 | 3 | | 0.2528 | 0.9361 | 0.7108 | 4 | | 0.1838 | 1.1263 | 0.6145 | 5 | | 0.4895 | 0.7904 | 0.8072 | 6 | | 0.1850 | 0.5048 | 0.9277 | 7 | | 0.0928 | 0.3762 | 0.9277 | 8 | | 0.0745 | 0.3751 | 0.9036 | 9 | ### Framework versions - Transformers 4.38.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.2
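For readers who want to reproduce the optimizer block in the hyperparameters above, the following sketch reconstructs it with Keras and the TF utilities in 🤗 Transformers. It mirrors the listed values (initial learning rate 3e-05, 1930 decay steps, weight decay rate 0.0001) but is an interpretation of the serialized config, not the authors' original training code.

```python
# Sketch only: rebuild the reported AdamWeightDecay + PolynomialDecay optimizer.
import tensorflow as tf
from transformers import AdamWeightDecay

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-05,
    decay_steps=1930,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    weight_decay_rate=0.0001,
)
```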
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google/vit-base-patch16-384", "model-index": [{"name": "Prahas10/roof-sample", "results": []}]}
Prahas10/roof-sample
null
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "base_model:google/vit-base-patch16-384", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T06:02:45+00:00
[]
[]
TAGS #transformers #tf #vit #image-classification #generated_from_keras_callback #base_model-google/vit-base-patch16-384 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Prahas10/roof-sample ==================== This model is a fine-tuned version of google/vit-base-patch16-384 on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.0745 * Validation Loss: 0.3751 * Train Accuracy: 0.9036 * Epoch: 9 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 3e-05, 'decay\_steps': 1930, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.0001} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.38.2 * TensorFlow 2.15.0 * Datasets 2.16.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 3e-05, 'decay\\_steps': 1930, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.0001}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.16.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tf #vit #image-classification #generated_from_keras_callback #base_model-google/vit-base-patch16-384 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 3e-05, 'decay\\_steps': 1930, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.0001}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.16.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
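Because the getting-started section of this card is also just a placeholder, here is a minimal, hedged sketch for the checkpoint `Dinesh1634/gemma-Chat-friendChat-Finetune-model` (a Gemma-architecture conversational model per its tags). The example turn is invented, and the snippet assumes the tokenizer bundles a Gemma-style chat template.

```python
# Sketch only: one conversational turn with the fine-tuned Gemma checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Dinesh1634/gemma-Chat-friendChat-Finetune-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Hey, how was your day?"}]  # invented example turn

# Assumes a Gemma-style chat template is bundled with the tokenizer.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```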
{"library_name": "transformers", "tags": []}
Dinesh1634/gemma-Chat-friendChat-Finetune-model
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T06:03:29+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
yxs33220/llama-2-7b-model-0417
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T06:04:04+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# microsoft/rho-math-1b-v0.1 AWQ - Model creator: [microsoft](https://huggingface.co/microsoft) - Original model: [rho-math-1b-v0.1](https://huggingface.co/microsoft/rho-math-1b-v0.1) ## Model Summary Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that are aligned with the desired distribution.
{"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["mistral", "4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "chatml", "nlp", "math"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
solidrust/rho-math-1b-v0.1-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "mistral", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "chatml", "nlp", "math", "en", "license:mit", "text-generation-inference", "region:us" ]
null
2024-04-17T06:07:07+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #mistral #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #nlp #math #en #license-mit #text-generation-inference #region-us
# microsoft/rho-math-1b-v0.1 AWQ - Model creator: microsoft - Original model: rho-math-1b-v0.1 ## Model Summary Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that are aligned with the desired distribution.
[ "# microsoft/rho-math-1b-v0.1 AWQ\n\n- Model creator: microsoft\n- Original model: rho-math-1b-v0.1", "## Model Summary\n\nRho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that aligned with the desired distribution." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mistral #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #nlp #math #en #license-mit #text-generation-inference #region-us \n", "# microsoft/rho-math-1b-v0.1 AWQ\n\n- Model creator: microsoft\n- Original model: rho-math-1b-v0.1", "## Model Summary\n\nRho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that aligned with the desired distribution." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gemma-2b-chatml This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.17.0 - Tokenizers 0.15.2
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "google/gemma-2b-it", "model-index": [{"name": "gemma-2b-chatml", "results": []}]}
DuongTrongChi/gemma-2b-chatml
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:google/gemma-2b-it", "license:gemma", "region:us" ]
null
2024-04-17T06:07:13+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-2b-it #license-gemma #region-us
# gemma-2b-chatml This model is a fine-tuned version of google/gemma-2b-it on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.17.0 - Tokenizers 0.15.2
[ "# gemma-2b-chatml\n\nThis model is a fine-tuned version of google/gemma-2b-it on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.2\n- Pytorch 2.1.2\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-2b-it #license-gemma #region-us \n", "# gemma-2b-chatml\n\nThis model is a fine-tuned version of google/gemma-2b-it on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.2\n- Pytorch 2.1.2\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
dachieu/wikivn-mistral
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T06:07:50+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
heyllm234/sc33
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T06:09:20+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1618 - F1: 0.8587 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2775 | 1.0 | 715 | 0.1782 | 0.8212 | | 0.1479 | 2.0 | 1430 | 0.1573 | 0.8466 | | 0.0949 | 3.0 | 2145 | 0.1618 | 0.8587 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de-fr", "results": []}]}
cogsci13/xlm-roberta-base-finetuned-panx-de-fr
null
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T06:10:16+00:00
[]
[]
TAGS #transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-de-fr ===================================== This model is a fine-tuned version of xlm-roberta-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1618 * F1: 0.8587 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me3-seqsight_4096_512_15M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.6795 - F1 Score: 0.5465 - Accuracy: 0.5709 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.685 | 13.33 | 200 | 0.6820 | 0.5590 | 0.5791 | | 0.6538 | 26.67 | 400 | 0.6989 | 0.5544 | 0.5546 | | 0.6371 | 40.0 | 600 | 0.7035 | 0.5552 | 0.5552 | | 0.6231 | 53.33 | 800 | 0.7242 | 0.5528 | 0.5524 | | 0.6118 | 66.67 | 1000 | 0.7434 | 0.5534 | 0.5530 | | 0.6028 | 80.0 | 1200 | 0.7386 | 0.5455 | 0.5454 | | 0.5947 | 93.33 | 1400 | 0.7431 | 0.5558 | 0.5568 | | 0.5874 | 106.67 | 1600 | 0.7410 | 0.5483 | 0.5481 | | 0.5813 | 120.0 | 1800 | 0.7644 | 0.5498 | 0.5538 | | 0.575 | 133.33 | 2000 | 0.7577 | 0.5549 | 0.5546 | | 0.5703 | 146.67 | 2200 | 0.7737 | 0.5530 | 0.5541 | | 0.564 | 160.0 | 2400 | 0.7792 | 0.5514 | 0.5579 | | 0.5576 | 173.33 | 2600 | 0.7707 | 0.5508 | 0.5557 | | 0.5498 | 186.67 | 2800 | 0.7854 | 0.5490 | 0.5486 | | 0.542 | 200.0 | 3000 | 0.7822 | 0.5502 | 0.5519 | | 0.5328 | 213.33 | 3200 | 0.8117 | 0.5511 | 0.5508 | | 0.5266 | 226.67 | 3400 | 0.8000 | 0.5507 | 0.5519 | | 0.519 | 240.0 | 3600 | 0.8066 | 0.5490 | 0.5505 | | 0.5112 | 253.33 | 3800 | 0.8253 | 0.5455 | 0.5467 | | 0.5037 | 266.67 | 4000 | 0.8472 | 0.5502 | 0.5511 | | 0.4982 | 280.0 | 4200 | 0.8444 | 0.5537 | 0.5557 | | 0.4911 | 293.33 | 4400 | 0.8521 | 0.5510 | 0.5522 | | 0.4837 | 306.67 | 4600 | 0.8540 | 0.5450 | 0.5451 | | 0.4791 | 320.0 | 4800 | 0.8318 | 0.5520 | 0.5516 | | 0.4721 | 333.33 | 5000 | 0.8689 | 0.5471 | 0.5508 | | 0.4664 | 346.67 | 5200 | 0.8779 | 0.5537 | 0.5533 | | 0.4602 | 360.0 | 5400 | 0.8672 | 0.5503 | 0.5505 | | 0.4553 | 373.33 | 5600 | 0.9015 | 0.5534 | 0.5535 | | 0.4491 | 386.67 | 5800 | 0.8863 | 0.5501 | 0.5503 | | 0.4435 | 400.0 | 6000 | 0.8874 | 0.5476 | 0.5473 | | 0.4395 | 413.33 | 6200 | 0.8964 | 0.5539 | 0.5538 | | 0.4336 | 426.67 | 6400 | 0.8894 | 0.5511 | 0.5514 | | 0.4298 | 440.0 | 6600 | 0.9134 | 0.5518 | 0.5519 | | 0.4259 | 453.33 | 6800 | 0.9244 | 0.5543 | 0.5546 | | 0.4222 | 466.67 | 7000 | 0.9253 | 0.5507 | 0.5505 | | 0.418 | 480.0 | 7200 | 0.9027 | 0.5483 | 0.5486 | | 0.4129 | 493.33 | 7400 | 0.9208 | 0.5499 | 0.55 | | 0.4109 | 506.67 | 7600 | 0.9495 | 0.5523 | 0.5527 | | 0.4074 | 520.0 | 7800 | 0.9354 | 0.5526 | 0.5533 | | 0.4047 | 533.33 | 8000 | 0.9399 | 0.5528 | 0.5535 | | 0.4026 | 546.67 | 8200 | 0.9254 | 0.5493 | 0.5492 | | 0.4004 | 560.0 | 8400 | 0.9491 | 0.5512 | 0.5511 | | 0.3969 | 573.33 | 8600 | 0.9514 | 0.5496 | 0.5503 | | 0.3954 | 586.67 | 8800 | 
0.9618 | 0.5501 | 0.5503 | | 0.3927 | 600.0 | 9000 | 0.9574 | 0.5504 | 0.5505 | | 0.3912 | 613.33 | 9200 | 0.9582 | 0.5488 | 0.5489 | | 0.3897 | 626.67 | 9400 | 0.9606 | 0.5464 | 0.5470 | | 0.3886 | 640.0 | 9600 | 0.9605 | 0.5483 | 0.5486 | | 0.3878 | 653.33 | 9800 | 0.9577 | 0.5460 | 0.5462 | | 0.3882 | 666.67 | 10000 | 0.9575 | 0.5461 | 0.5465 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_4096_512_15M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_4096_512_15M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-04-17T06:10:40+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
GUE\_EMP\_H3K4me3-seqsight\_4096\_512\_15M-L32\_all =================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset. It achieves the following results on the evaluation set: * Loss: 0.6795 * F1 Score: 0.5465 * Accuracy: 0.5709 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
kunalchamoli/test
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T06:10:43+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Gaivoronsky/Mistral-7B-Saiga](https://huggingface.co/Gaivoronsky/Mistral-7B-Saiga) as a base. ### Models Merged The following models were included in the merge: * [HuggingFaceH4/mistral-7b-grok](https://huggingface.co/HuggingFaceH4/mistral-7b-grok) * [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Gaivoronsky/Mistral-7B-Saiga layer_range: - 0 - 32 - model: HuggingFaceH4/mistral-7b-grok layer_range: - 0 - 32 - model: NousResearch/Yarn-Mistral-7b-128k layer_range: - 0 - 32 merge_method: model_stock base_model: Gaivoronsky/Mistral-7B-Saiga parameters: t: - filter: self_attn value: - 0 - 0.5 - 0.3 - 0.7 - 1 - filter: mlp value: - 1 - 0.5 - 0.7 - 0.3 - 0 - filter: mlp value: - 1 - 0.5 - 0.7 - 0.3 - 0 - value: 0.5 dtype: bfloat16 ```
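The card ends at the merge configuration without a usage snippet. As a hedged sketch only (the repository id `ehristoforu/0001` is taken from this record, and the prompt and generation settings are illustrative placeholders), the merged checkpoint can be loaded like any other causal LM with `transformers`:

```python
# Hedged sketch: loading the merged model for inference.
# Assumes the merged weights were pushed to "ehristoforu/0001", the repository id in this record.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ehristoforu/0001"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype declared in the merge config
    device_map="auto",           # requires the `accelerate` package
)

inputs = tokenizer("Hello! How are you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```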
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Gaivoronsky/Mistral-7B-Saiga", "HuggingFaceH4/mistral-7b-grok", "NousResearch/Yarn-Mistral-7b-128k"]}
ehristoforu/0001
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2403.19522", "base_model:Gaivoronsky/Mistral-7B-Saiga", "base_model:HuggingFaceH4/mistral-7b-grok", "base_model:NousResearch/Yarn-Mistral-7b-128k", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T06:11:32+00:00
[ "2403.19522" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-Gaivoronsky/Mistral-7B-Saiga #base_model-HuggingFaceH4/mistral-7b-grok #base_model-NousResearch/Yarn-Mistral-7b-128k #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the Model Stock merge method using Gaivoronsky/Mistral-7B-Saiga as a base. ### Models Merged The following models were included in the merge: * HuggingFaceH4/mistral-7b-grok * NousResearch/Yarn-Mistral-7b-128k ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the Model Stock merge method using Gaivoronsky/Mistral-7B-Saiga as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* HuggingFaceH4/mistral-7b-grok\n* NousResearch/Yarn-Mistral-7b-128k", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-Gaivoronsky/Mistral-7B-Saiga #base_model-HuggingFaceH4/mistral-7b-grok #base_model-NousResearch/Yarn-Mistral-7b-128k #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the Model Stock merge method using Gaivoronsky/Mistral-7B-Saiga as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* HuggingFaceH4/mistral-7b-grok\n* NousResearch/Yarn-Mistral-7b-128k", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 40 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
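The card does not show how to attach the adapter to its base model. The following is a hedged sketch, not a confirmed recipe: it assumes the PEFT weights were pushed to `devsomesh/results` (the repository id in this record), that the tokenizer of the base model `NousResearch/Llama-2-7b-chat-hf` is the right one, and that the prompt is a placeholder.

```python
# Hedged sketch: loading the PEFT adapter on top of its Llama-2-chat base for generation.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "devsomesh/results",        # adapter repository id from this record (assumed)
    torch_dtype=torch.float16,
    device_map="auto",          # requires the `accelerate` package
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")

inputs = tokenizer("Explain what this adapter was fine-tuned for.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```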
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "NousResearch/Llama-2-7b-chat-hf", "model-index": [{"name": "results", "results": []}]}
devsomesh/results
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
null
2024-04-17T06:11:40+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-NousResearch/Llama-2-7b-chat-hf #region-us
# results This model is a fine-tuned version of NousResearch/Llama-2-7b-chat-hf on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 40 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
[ "# results\n\nThis model is a fine-tuned version of NousResearch/Llama-2-7b-chat-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 40", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.0" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-NousResearch/Llama-2-7b-chat-hf #region-us \n", "# results\n\nThis model is a fine-tuned version of NousResearch/Llama-2-7b-chat-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 40", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.0" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4-seqsight_4096_512_15M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset. It achieves the following results on the evaluation set: - Loss: 0.9598 - F1 Score: 0.7788 - Accuracy: 0.7789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:| | 0.5593 | 33.33 | 200 | 0.4966 | 0.7758 | 0.7769 | | 0.4288 | 66.67 | 400 | 0.5214 | 0.7613 | 0.7604 | | 0.3906 | 100.0 | 600 | 0.5288 | 0.7744 | 0.7741 | | 0.3513 | 133.33 | 800 | 0.5683 | 0.7685 | 0.7693 | | 0.3185 | 166.67 | 1000 | 0.6035 | 0.7737 | 0.7741 | | 0.2903 | 200.0 | 1200 | 0.6329 | 0.7692 | 0.7687 | | 0.2654 | 233.33 | 1400 | 0.6387 | 0.7751 | 0.7748 | | 0.2427 | 266.67 | 1600 | 0.7132 | 0.7673 | 0.7687 | | 0.2244 | 300.0 | 1800 | 0.7433 | 0.7708 | 0.7714 | | 0.2072 | 333.33 | 2000 | 0.7907 | 0.7792 | 0.7796 | | 0.1935 | 366.67 | 2200 | 0.8020 | 0.7738 | 0.7741 | | 0.1808 | 400.0 | 2400 | 0.8036 | 0.7692 | 0.7693 | | 0.1672 | 433.33 | 2600 | 0.8224 | 0.7688 | 0.7687 | | 0.1579 | 466.67 | 2800 | 0.8690 | 0.7703 | 0.7707 | | 0.1504 | 500.0 | 3000 | 0.8927 | 0.7726 | 0.7721 | | 0.1417 | 533.33 | 3200 | 0.8674 | 0.7621 | 0.7632 | | 0.1362 | 566.67 | 3400 | 0.9234 | 0.7748 | 0.7748 | | 0.1282 | 600.0 | 3600 | 0.9192 | 0.7698 | 0.7714 | | 0.1221 | 633.33 | 3800 | 0.9024 | 0.7616 | 0.7625 | | 0.1169 | 666.67 | 4000 | 0.9320 | 0.7687 | 0.7687 | | 0.1119 | 700.0 | 4200 | 0.9804 | 0.7642 | 0.7652 | | 0.1068 | 733.33 | 4400 | 0.9783 | 0.7656 | 0.7666 | | 0.1035 | 766.67 | 4600 | 1.0196 | 0.7801 | 0.7803 | | 0.0986 | 800.0 | 4800 | 0.9935 | 0.7737 | 0.7741 | | 0.0952 | 833.33 | 5000 | 1.0153 | 0.7712 | 0.7721 | | 0.0927 | 866.67 | 5200 | 1.0372 | 0.7731 | 0.7734 | | 0.09 | 900.0 | 5400 | 1.0296 | 0.7742 | 0.7741 | | 0.0879 | 933.33 | 5600 | 1.0426 | 0.7690 | 0.7693 | | 0.0849 | 966.67 | 5800 | 1.0173 | 0.7682 | 0.7680 | | 0.0826 | 1000.0 | 6000 | 1.0551 | 0.7744 | 0.7748 | | 0.0804 | 1033.33 | 6200 | 1.0605 | 0.7638 | 0.7645 | | 0.0785 | 1066.67 | 6400 | 1.0569 | 0.7744 | 0.7748 | | 0.0753 | 1100.0 | 6600 | 1.0817 | 0.7698 | 0.7714 | | 0.0745 | 1133.33 | 6800 | 1.0738 | 0.7764 | 0.7769 | | 0.0724 | 1166.67 | 7000 | 1.0998 | 0.7740 | 0.7748 | | 0.0707 | 1200.0 | 7200 | 1.0873 | 0.7693 | 0.7693 | | 0.0696 | 1233.33 | 7400 | 1.1136 | 0.7731 | 0.7734 | | 0.0696 | 1266.67 | 7600 | 1.1034 | 0.7752 | 0.7762 | | 0.068 | 1300.0 | 7800 | 1.1341 | 0.7688 | 0.7693 | | 0.0666 | 1333.33 | 8000 | 1.1392 | 0.7736 | 0.7741 | | 0.0651 | 1366.67 | 8200 | 1.1171 | 0.7733 | 0.7734 | | 0.0645 | 1400.0 | 8400 | 1.1035 | 0.7789 | 0.7789 | | 0.0636 | 1433.33 | 8600 | 1.1064 | 0.7732 | 0.7734 | | 0.0627 | 1466.67 
| 8800 | 1.1113 | 0.7691 | 0.7693 | | 0.0628 | 1500.0 | 9000 | 1.1336 | 0.7747 | 0.7755 | | 0.0624 | 1533.33 | 9200 | 1.1229 | 0.7758 | 0.7762 | | 0.062 | 1566.67 | 9400 | 1.1294 | 0.7747 | 0.7755 | | 0.0608 | 1600.0 | 9600 | 1.1267 | 0.7770 | 0.7775 | | 0.0612 | 1633.33 | 9800 | 1.1303 | 0.7743 | 0.7748 | | 0.0606 | 1666.67 | 10000 | 1.1329 | 0.7757 | 0.7762 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
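Since the card stops at the framework versions, here is a rough, heavily hedged sketch of how the adapter could be loaded for sequence classification. The repository ids come from this record; whether the base checkpoint ships a compatible tokenizer, and how GUE_EMP_H4 sequences should be encoded, are assumptions the card does not confirm.

```python
# Rough sketch, not a confirmed recipe: loading the PEFT adapter for sequence classification.
# The DNA string below is a placeholder; real inputs come from the GUE_EMP_H4 dataset.
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

adapter_repo = "mahdibaghbanzadeh/GUE_EMP_H4-seqsight_4096_512_15M-L32_all"
base_repo = "mahdibaghbanzadeh/seqsight_4096_512_15M"

model = AutoPeftModelForSequenceClassification.from_pretrained(adapter_repo)
tokenizer = AutoTokenizer.from_pretrained(base_repo)  # assumes the base repo provides a tokenizer

inputs = tokenizer("ACGTACGTACGTACGTACGT", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```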
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H4-seqsight_4096_512_15M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4-seqsight_4096_512_15M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-04-17T06:12:00+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
GUE\_EMP\_H4-seqsight\_4096\_512\_15M-L32\_all ============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset. It achieves the following results on the evaluation set: * Loss: 0.9598 * F1 Score: 0.7788 * Accuracy: 0.7789 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6948 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 2.4201 | | 2.7625 | 2.0 | 500 | 1.8158 | | 2.7625 | 3.0 | 750 | 1.6948 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cpu - Datasets 2.18.0 - Tokenizers 0.15.2
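As a hedged usage sketch (the repository id `GR1M/my_awesome_qa_model` is taken from this record, and the question/context pair is a made-up placeholder), the fine-tuned checkpoint would typically be queried through the question-answering pipeline:

```python
# Hedged sketch: extractive question answering with the fine-tuned DistilBERT checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="GR1M/my_awesome_qa_model")

result = qa(
    question="How many epochs was the model trained for?",
    context="The model was fine-tuned from distilbert-base-uncased for three epochs with a learning rate of 2e-05.",
)
print(result["answer"], result["score"])
```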
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "my_awesome_qa_model", "results": []}]}
GR1M/my_awesome_qa_model
null
[ "transformers", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-17T06:14:16+00:00
[]
[]
TAGS #transformers #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
my\_awesome\_qa\_model ====================== This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.6948 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2+cpu * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
object-detection
pytorch
![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/yolov7_quantized/web-assets/model_demo.png)

# Yolo-v7-Quantized: Optimized for Mobile Deployment

## Quantized real-time object detection optimized for mobile and edge

YoloV7 is a machine learning model that predicts bounding boxes and classes of objects in an image. This model is post-training quantized to int8 using samples from the [COCO dataset](https://cocodataset.org/#home).

This model is an implementation of Yolo-v7-Quantized found [here](https://github.com/WongKinYiu/yolov7/).
This repository provides scripts to run Yolo-v7-Quantized on Qualcomm® devices.
More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/yolov7_quantized).

### Model Details

- **Model Type:** Object detection
- **Model Stats:**
  - Model checkpoint: YoloV7 Tiny
  - Input resolution: 720p (720x1280)
  - Number of parameters: 6.24M
  - Model size: 6.23 MB

| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 6.122 ms | 0 - 13 MB | INT8 | NPU | [Yolo-v7-Quantized.tflite](https://huggingface.co/qualcomm/Yolo-v7-Quantized/blob/main/Yolo-v7-Quantized.tflite) |
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 5.732 ms | 0 - 12 MB | INT8 | NPU | [Yolo-v7-Quantized.so](https://huggingface.co/qualcomm/Yolo-v7-Quantized/blob/main/Yolo-v7-Quantized.so) |

## Installation

This model can be installed as a Python package via pip.

```bash
pip install "qai-hub-models[yolov7_quantized]"
```

## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.

With this API token, you can configure your client to run models on the cloud-hosted devices.

```bash
qai-hub configure --api_token API_TOKEN
```

Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.

## Demo off target

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.

```bash
python -m qai_hub_models.models.yolov7_quantized.demo
```

The above demo runs a reference implementation of pre-processing, model inference, and post-processing.

**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above).

```
%run -m qai_hub_models.models.yolov7_quantized.demo
```

### Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.

```bash
python -m qai_hub_models.models.yolov7_quantized.export
```

```
Profile Job summary of Yolo-v7-Quantized
--------------------------------------------------
Device: QCS8550 (Proxy) (12)
Estimated Inference Time: 5.98 ms
Estimated Peak Memory Range: 4.71-14.69 MB
Compute Units: NPU (220) | Total (220)
```

## How does this work?
This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/Yolo-v7-Quantized/export.py) leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:

Step 1: **Compile model for on-device deployment**

To compile a PyTorch model for on-device deployment, we first trace the model in memory using `jit.trace` and then call the `submit_compile_job` API.

```python
import torch

import qai_hub as hub
from qai_hub_models.models.yolov7_quantized import Model

# Load the model
torch_model = Model.from_pretrained()
torch_model.eval()

# Device
device = hub.Device("Samsung Galaxy S23")

# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```

Step 2: **Performance profiling on cloud-hosted device**

After compiling the model in step 1, it can be profiled on-device using the `target_model`. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics.

```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```

Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.

```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```

With the output of the model, you can compute metrics like PSNR and relative errors, or spot-check the output against the expected output.

**Note**: This on-device profiling and inference requires access to Qualcomm® AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).

## Run demo on a cloud-hosted device

You can also run the demo on-device.

```bash
python -m qai_hub_models.models.yolov7_quantized.demo --on-device
```

**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above).

```
%run -m qai_hub_models.models.yolov7_quantized.demo -- --on-device
```

## Deploying compiled model to Android

The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application.

## View on Qualcomm® AI Hub

Get more details on Yolo-v7-Quantized's performance across various devices [here](https://aihub.qualcomm.com/models/yolov7_quantized).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

## License

- The license for the original implementation of Yolo-v7-Quantized can be found [here](https://github.com/WongKinYiu/yolov7/blob/main/LICENSE.md).
- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url}) ## References * [YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors](https://arxiv.org/abs/2207.02696) * [Source Model Implementation](https://github.com/WongKinYiu/yolov7/) ## Community * Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:[email protected]).
{"license": "gpl-3.0", "library_name": "pytorch", "tags": ["real_time", "quantized", "android"], "pipeline_tag": "object-detection"}
qualcomm/Yolo-v7-Quantized
null
[ "pytorch", "tflite", "real_time", "quantized", "android", "object-detection", "arxiv:2207.02696", "license:gpl-3.0", "region:us" ]
null
2024-04-17T06:14:46+00:00
[ "2207.02696" ]
[]
TAGS #pytorch #tflite #real_time #quantized #android #object-detection #arxiv-2207.02696 #license-gpl-3.0 #region-us
![](URL Yolo-v7-Quantized: Optimized for Mobile Deployment ================================================== Quantized real-time object detection optimized for mobile and edge ------------------------------------------------------------------ YoloV7 is a machine learning model that predicts bounding boxes and classes of objects in an image. This model is post-training quantized to int8 using samples from the COCO dataset. This model is an implementation of Yolo-v7-Quantized found here. This repository provides scripts to run Yolo-v7-Quantized on Qualcomm® devices. More details on model performance across various devices, can be found here. ### Model Details * Model Type: Object detection * Model Stats: + Model checkpoint: YoloV7 Tiny + Input resolution: 720p (720x1280) + Number of parameters: 6.24M + Model size: 6.23 MB Installation ------------ This model can be installed as a Python package via pip. Configure Qualcomm® AI Hub to run this model on a cloud-hosted device --------------------------------------------------------------------- Sign-in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in navigate to 'Account -> Settings -> API Token'. With this API token, you can configure your client to run models on the cloud hosted devices. Navigate to docs for more information. Demo off target --------------- The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input. The above demo runs a reference implementation of pre-processing, model inference, and post processing. NOTE: If you want running in a Jupyter Notebook or Google Colab like environment, please add the following to your cell (instead of the above). ### Run model on a cloud-hosted device In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following: * Performance check on-device on a cloud-hosted device * Downloads compiled assets that can be deployed on-device for Android. * Accuracy check between PyTorch and on-device outputs. How does this work? ------------------- This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Lets go through each step below in detail: Step 1: Compile model for on-device deployment To compile a PyTorch model for on-device deployment, we first trace the model in memory using the 'URL' and then call the 'submit\_compile\_job' API. Step 2: Performance profiling on cloud-hosted device After compiling models from step 1. Models can be profiled model on-device using the 'target\_model'. Note that this scripts runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics. Step 3: Verify on-device accuracy To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud hosted device. With the output of the model, you can compute like PSNR, relative errors or spot check the output with expected output. Note: This on-device profiling and inference requires access to Qualcomm® AI Hub. Sign up for access. Run demo on a cloud-hosted device --------------------------------- You can also run the demo on-device. NOTE: If you want running in a Jupyter Notebook or Google Colab like environment, please add the following to your cell (instead of the above). 
Deploying compiled model to Android ----------------------------------- The models can be deployed using multiple runtimes: * TensorFlow Lite ('.tflite' export): This tutorial provides a guide to deploy the .tflite model in an Android application. * QNN ('.so' export ): This sample app provides instructions on how to use the '.so' shared library in an Android application. View on Qualcomm® AI Hub ------------------------ Get more details on Yolo-v7-Quantized's performance across various devices here. Explore all available models on Qualcomm® AI Hub License ------- * The license for the original implementation of Yolo-v7-Quantized can be found here. * The license for the compiled assets for on-device deployment can be found here References ---------- * YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors * Source Model Implementation Community --------- * Join our AI Hub Slack community to collaborate, post questions and learn more about on-device AI. * For questions or feedback please reach out to us.
[ "### Model Details\n\n\n* Model Type: Object detection\n* Model Stats:\n\t+ Model checkpoint: YoloV7 Tiny\n\t+ Input resolution: 720p (720x1280)\n\t+ Number of parameters: 6.24M\n\t+ Model size: 6.23 MB\n\n\n\nInstallation\n------------\n\n\nThis model can be installed as a Python package via pip.\n\n\nConfigure Qualcomm® AI Hub to run this model on a cloud-hosted device\n---------------------------------------------------------------------\n\n\nSign-in to Qualcomm® AI Hub with your\nQualcomm® ID. Once signed in navigate to 'Account -> Settings -> API Token'.\n\n\nWith this API token, you can configure your client to run models on the cloud\nhosted devices.\n\n\nNavigate to docs for more information.\n\n\nDemo off target\n---------------\n\n\nThe package contains a simple end-to-end demo that downloads pre-trained\nweights and runs this model on a sample input.\n\n\nThe above demo runs a reference implementation of pre-processing, model\ninference, and post processing.\n\n\nNOTE: If you want running in a Jupyter Notebook or Google Colab like\nenvironment, please add the following to your cell (instead of the above).", "### Run model on a cloud-hosted device\n\n\nIn addition to the demo, you can also run the model on a cloud-hosted Qualcomm®\ndevice. This script does the following:\n\n\n* Performance check on-device on a cloud-hosted device\n* Downloads compiled assets that can be deployed on-device for Android.\n* Accuracy check between PyTorch and on-device outputs.\n\n\nHow does this work?\n-------------------\n\n\nThis export script\nleverages Qualcomm® AI Hub to optimize, validate, and deploy this model\non-device. Lets go through each step below in detail:\n\n\nStep 1: Compile model for on-device deployment\n\n\nTo compile a PyTorch model for on-device deployment, we first trace the model\nin memory using the 'URL' and then call the 'submit\\_compile\\_job' API.\n\n\nStep 2: Performance profiling on cloud-hosted device\n\n\nAfter compiling models from step 1. Models can be profiled model on-device using the\n'target\\_model'. Note that this scripts runs the model on a device automatically\nprovisioned in the cloud. Once the job is submitted, you can navigate to a\nprovided job URL to view a variety of on-device performance metrics.\n\n\nStep 3: Verify on-device accuracy\n\n\nTo verify the accuracy of the model on-device, you can run on-device inference\non sample input data on the same cloud hosted device.\n\n\nWith the output of the model, you can compute like PSNR, relative errors or\nspot check the output with expected output.\n\n\nNote: This on-device profiling and inference requires access to Qualcomm®\nAI Hub. 
Sign up for access.\n\n\nRun demo on a cloud-hosted device\n---------------------------------\n\n\nYou can also run the demo on-device.\n\n\nNOTE: If you want running in a Jupyter Notebook or Google Colab like\nenvironment, please add the following to your cell (instead of the above).\n\n\nDeploying compiled model to Android\n-----------------------------------\n\n\nThe models can be deployed using multiple runtimes:\n\n\n* TensorFlow Lite ('.tflite' export): This\ntutorial provides a\nguide to deploy the .tflite model in an Android application.\n* QNN ('.so' export ): This sample\napp\nprovides instructions on how to use the '.so' shared library in an Android application.\n\n\nView on Qualcomm® AI Hub\n------------------------\n\n\nGet more details on Yolo-v7-Quantized's performance across various devices here.\nExplore all available models on Qualcomm® AI Hub\n\n\nLicense\n-------\n\n\n* The license for the original implementation of Yolo-v7-Quantized can be found\nhere.\n* The license for the compiled assets for on-device deployment can be found here\n\n\nReferences\n----------\n\n\n* YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors\n* Source Model Implementation\n\n\nCommunity\n---------\n\n\n* Join our AI Hub Slack community to collaborate, post questions and learn more about on-device AI.\n* For questions or feedback please reach out to us." ]
[ "TAGS\n#pytorch #tflite #real_time #quantized #android #object-detection #arxiv-2207.02696 #license-gpl-3.0 #region-us \n", "### Model Details\n\n\n* Model Type: Object detection\n* Model Stats:\n\t+ Model checkpoint: YoloV7 Tiny\n\t+ Input resolution: 720p (720x1280)\n\t+ Number of parameters: 6.24M\n\t+ Model size: 6.23 MB\n\n\n\nInstallation\n------------\n\n\nThis model can be installed as a Python package via pip.\n\n\nConfigure Qualcomm® AI Hub to run this model on a cloud-hosted device\n---------------------------------------------------------------------\n\n\nSign-in to Qualcomm® AI Hub with your\nQualcomm® ID. Once signed in navigate to 'Account -> Settings -> API Token'.\n\n\nWith this API token, you can configure your client to run models on the cloud\nhosted devices.\n\n\nNavigate to docs for more information.\n\n\nDemo off target\n---------------\n\n\nThe package contains a simple end-to-end demo that downloads pre-trained\nweights and runs this model on a sample input.\n\n\nThe above demo runs a reference implementation of pre-processing, model\ninference, and post processing.\n\n\nNOTE: If you want running in a Jupyter Notebook or Google Colab like\nenvironment, please add the following to your cell (instead of the above).", "### Run model on a cloud-hosted device\n\n\nIn addition to the demo, you can also run the model on a cloud-hosted Qualcomm®\ndevice. This script does the following:\n\n\n* Performance check on-device on a cloud-hosted device\n* Downloads compiled assets that can be deployed on-device for Android.\n* Accuracy check between PyTorch and on-device outputs.\n\n\nHow does this work?\n-------------------\n\n\nThis export script\nleverages Qualcomm® AI Hub to optimize, validate, and deploy this model\non-device. Lets go through each step below in detail:\n\n\nStep 1: Compile model for on-device deployment\n\n\nTo compile a PyTorch model for on-device deployment, we first trace the model\nin memory using the 'URL' and then call the 'submit\\_compile\\_job' API.\n\n\nStep 2: Performance profiling on cloud-hosted device\n\n\nAfter compiling models from step 1. Models can be profiled model on-device using the\n'target\\_model'. Note that this scripts runs the model on a device automatically\nprovisioned in the cloud. Once the job is submitted, you can navigate to a\nprovided job URL to view a variety of on-device performance metrics.\n\n\nStep 3: Verify on-device accuracy\n\n\nTo verify the accuracy of the model on-device, you can run on-device inference\non sample input data on the same cloud hosted device.\n\n\nWith the output of the model, you can compute like PSNR, relative errors or\nspot check the output with expected output.\n\n\nNote: This on-device profiling and inference requires access to Qualcomm®\nAI Hub. 
Sign up for access.\n\n\nRun demo on a cloud-hosted device\n---------------------------------\n\n\nYou can also run the demo on-device.\n\n\nNOTE: If you want running in a Jupyter Notebook or Google Colab like\nenvironment, please add the following to your cell (instead of the above).\n\n\nDeploying compiled model to Android\n-----------------------------------\n\n\nThe models can be deployed using multiple runtimes:\n\n\n* TensorFlow Lite ('.tflite' export): This\ntutorial provides a\nguide to deploy the .tflite model in an Android application.\n* QNN ('.so' export ): This sample\napp\nprovides instructions on how to use the '.so' shared library in an Android application.\n\n\nView on Qualcomm® AI Hub\n------------------------\n\n\nGet more details on Yolo-v7-Quantized's performance across various devices here.\nExplore all available models on Qualcomm® AI Hub\n\n\nLicense\n-------\n\n\n* The license for the original implementation of Yolo-v7-Quantized can be found\nhere.\n* The license for the compiled assets for on-device deployment can be found here\n\n\nReferences\n----------\n\n\n* YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors\n* Source Model Implementation\n\n\nCommunity\n---------\n\n\n* Join our AI Hub Slack community to collaborate, post questions and learn more about on-device AI.\n* For questions or feedback please reach out to us." ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2680 - F1: 0.8403 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.558 | 1.0 | 191 | 0.3285 | 0.7843 | | 0.2588 | 2.0 | 382 | 0.2693 | 0.8234 | | 0.17 | 3.0 | 573 | 0.2680 | 0.8403 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
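As a hedged usage sketch: the repository id comes from this record, the example sentence is a placeholder, and French PAN-X named-entity recognition is inferred from the model name rather than stated explicitly in the card.

```python
# Hedged sketch: running token classification (NER) with the fine-tuned XLM-R checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cogsci13/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Marie Curie a travaillé à l'Université de Paris."))
```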
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-fr", "results": []}]}
cogsci13/xlm-roberta-base-finetuned-panx-fr
null
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T06:16:46+00:00
[]
[]
TAGS #transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-fr ================================== This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.2680 * F1: 0.8403 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# NumFa 3B 1 epoch

NumFa 3B 1 epoch is the first-epoch checkpoint of the NumFa 3B model. (checkpoint2)

Base model: openllama3b

**For testing only**

## Model Details

### Model Description

The model was trained on TPU.

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** NumFa
- **Model type:** text-generation
- **Language(s) (NLP):** English
- **License:** apache-2.0

### Out-of-Scope Use

Math, coding, and other languages

## Bias, Risks, and Limitations

The model can carry biases from its dataset. Use at your own risk!

## How to Get Started with the Model

Use the code below to get started with the model.

**Example**

1.
```python
# !pip install accelerate sentencepiece transformers bitsandbytes
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="numfa_3b_1epoch", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
outputs = pipe("test is", max_new_tokens=300, do_sample=True, temperature=0.9, top_k=50, top_p=0.95, no_repeat_ngram_size=2, typical_p=1.)

print(outputs[0]["generated_text"])
```
{"language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation"}
numfa/numfa_3b_1epoch
null
[ "transformers", "safetensors", "llama", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T06:17:51+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# NumFa 3B 1 epoch NumFa 3B 1 epoch is the first epoch of NumFa 3B model. (checkpoint2) Base model: openllama3b For testing only ## Model Details ### Model Description The model was trained by TPU. This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: NumFa - Model type: text-generation - Language(s) (NLP): English - License: apache-2.0 ### Out-of-Scope Use Math, Coding, and other language ## Bias, Risks, and Limitations The model can has a bias from dataset. Use at your own risks! ## How to Get Started with the Model Use the code below to get started with the model. Example 1.
[ "# NumFa 3B 1 epoch\n\nNumFa 3B 1 epoch is the first epoch of NumFa 3B model. (checkpoint2)\n\nBase model: openllama3b\n\nFor testing only", "## Model Details", "### Model Description\n\nThe model was trained by TPU.\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: NumFa\n- Model type: text-generation\n- Language(s) (NLP): English\n- License: apache-2.0", "### Out-of-Scope Use\n\nMath, Coding, and other language", "## Bias, Risks, and Limitations\n\nThe model can has a bias from dataset. Use at your own risks!", "## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\nExample\n\n1." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# NumFa 3B 1 epoch\n\nNumFa 3B 1 epoch is the first epoch of NumFa 3B model. (checkpoint2)\n\nBase model: openllama3b\n\nFor testing only", "## Model Details", "### Model Description\n\nThe model was trained by TPU.\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: NumFa\n- Model type: text-generation\n- Language(s) (NLP): English\n- License: apache-2.0", "### Out-of-Scope Use\n\nMath, Coding, and other language", "## Bias, Risks, and Limitations\n\nThe model can has a bias from dataset. Use at your own risks!", "## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\nExample\n\n1." ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
indiana500/gpt2-base-fine-tuned-binary-classification
null
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T06:17:58+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2602 - F1: 0.8334 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7096 | 1.0 | 70 | 0.3180 | 0.7447 | | 0.2814 | 2.0 | 140 | 0.2529 | 0.7987 | | 0.1681 | 3.0 | 210 | 0.2602 | 0.8334 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
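A minimal quick-start sketch for this record's checkpoint, assuming it follows the standard `transformers` token-classification interface; the model ID is taken from this card, while the example sentence and the `aggregation_strategy` choice are illustrative assumptions rather than documented usage:

```python
from transformers import pipeline

# Load the fine-tuned XLM-R checkpoint as a token-classification (NER-style) tagger.
ner = pipeline(
    "token-classification",
    model="cogsci13/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

# Illustrative Italian sentence; the label set depends on the (PAN-X style) training data.
print(ner("La sede centrale di Ferrari si trova a Maranello, in Italia."))
```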
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-it", "results": []}]}
cogsci13/xlm-roberta-base-finetuned-panx-it
null
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T06:18:13+00:00
[]
[]
TAGS #transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-it ================================== This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.2602 * F1: 0.8334 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3999 - F1: 0.6814 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0227 | 1.0 | 50 | 0.5195 | 0.5378 | | 0.4796 | 2.0 | 100 | 0.4254 | 0.6937 | | 0.3709 | 3.0 | 150 | 0.3999 | 0.6814 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-en", "results": []}]}
cogsci13/xlm-roberta-base-finetuned-panx-en
null
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T06:19:03+00:00
[]
[]
TAGS #transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-en ================================== This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.3999 * F1: 0.6814 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/lightblue/Karasu-Mixtral-8x22B-v0.1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 29.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 32.8 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 38.0 | | | [GGUF](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 42.1 | | | [GGUF](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 42.7 | | | [GGUF](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 46.8 | | | [PART 1](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q2_K.gguf.part2of2) | i1-Q2_K | 52.2 | IQ3_XXS probably better | | [PART 1](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ3_XXS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ3_XXS.gguf.part2of2) | i1-IQ3_XXS | 55.0 | lower quality | | [PART 1](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 58.3 | | | [PART 1](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 61.6 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 61.6 | IQ3_XS probably better | | [PART 1](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ3_M.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 64.6 | | | [PART 1](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 67.9 | IQ3_S probably better | | [PART 1](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 72.7 | IQ3_M probably better | | [PART 1](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 75.6 | | | [PART 1](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 80.0 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 80.6 | optimal size/speed/quality | | [PART 1](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 85.7 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 97.1 | | | [PART 1](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q5_K_M.gguf.part3of3) | i1-Q5_K_M | 100.1 | | | [PART 1](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF/resolve/main/Karasu-Mixtral-8x22B-v0.1.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 115.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are 
Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
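Because several of the quants listed above ship as `partXofY` pieces, a small reassembly sketch may be useful; the filename follows the i1-Q4_K_M row of the table, and plain byte concatenation is assumed to be the intended recipe (as described in the linked READMEs), not a verified one:

```python
import shutil
from pathlib import Path

# Reassemble a split download (e.g. the i1-Q4_K_M quant) into a single GGUF file.
stem = "Karasu-Mixtral-8x22B-v0.1.i1-Q4_K_M.gguf"
parts = sorted(Path(".").glob(f"{stem}.part*of*"))  # part1of2 sorts before part2of2

with open(stem, "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)  # stream so the ~40 GB parts never sit in RAM

print(f"wrote {stem} from {len(parts)} parts")
```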
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["openchat/openchat_sharegpt4_dataset"], "base_model": "lightblue/Karasu-Mixtral-8x22B-v0.1", "quantized_by": "mradermacher"}
mradermacher/Karasu-Mixtral-8x22B-v0.1-i1-GGUF
null
[ "transformers", "gguf", "en", "dataset:openchat/openchat_sharegpt4_dataset", "base_model:lightblue/Karasu-Mixtral-8x22B-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-17T06:21:21+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #dataset-openchat/openchat_sharegpt4_dataset #base_model-lightblue/Karasu-Mixtral-8x22B-v0.1 #license-apache-2.0 #endpoints_compatible #region-us
About ----- weighted/imatrix quants of URL static quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #dataset-openchat/openchat_sharegpt4_dataset #base_model-lightblue/Karasu-Mixtral-8x22B-v0.1 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3-seqsight_4096_512_15M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5628 - F1 Score: 0.7388 - Accuracy: 0.7388 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:| | 0.5606 | 33.33 | 200 | 0.5240 | 0.7455 | 0.7462 | | 0.4423 | 66.67 | 400 | 0.5672 | 0.7521 | 0.7522 | | 0.4048 | 100.0 | 600 | 0.5936 | 0.7475 | 0.7475 | | 0.3725 | 133.33 | 800 | 0.6502 | 0.7481 | 0.7482 | | 0.3415 | 166.67 | 1000 | 0.6948 | 0.7328 | 0.7328 | | 0.3135 | 200.0 | 1200 | 0.7044 | 0.7314 | 0.7315 | | 0.2876 | 233.33 | 1400 | 0.7658 | 0.7334 | 0.7335 | | 0.2666 | 266.67 | 1600 | 0.7936 | 0.7268 | 0.7268 | | 0.2452 | 300.0 | 1800 | 0.8402 | 0.7286 | 0.7288 | | 0.2269 | 333.33 | 2000 | 0.8775 | 0.7375 | 0.7375 | | 0.2104 | 366.67 | 2200 | 0.8565 | 0.7318 | 0.7321 | | 0.1971 | 400.0 | 2400 | 0.9102 | 0.7455 | 0.7455 | | 0.1854 | 433.33 | 2600 | 0.9282 | 0.7346 | 0.7348 | | 0.1745 | 466.67 | 2800 | 0.9865 | 0.7388 | 0.7388 | | 0.1658 | 500.0 | 3000 | 0.9656 | 0.7382 | 0.7381 | | 0.1558 | 533.33 | 3200 | 0.9974 | 0.7422 | 0.7422 | | 0.1483 | 566.67 | 3400 | 0.9639 | 0.7342 | 0.7341 | | 0.1409 | 600.0 | 3600 | 1.0181 | 0.7335 | 0.7335 | | 0.1349 | 633.33 | 3800 | 1.0474 | 0.7400 | 0.7401 | | 0.1289 | 666.67 | 4000 | 1.0460 | 0.7348 | 0.7348 | | 0.1233 | 700.0 | 4200 | 1.0598 | 0.7361 | 0.7361 | | 0.1187 | 733.33 | 4400 | 1.0877 | 0.7441 | 0.7442 | | 0.1146 | 766.67 | 4600 | 1.1184 | 0.7440 | 0.7442 | | 0.11 | 800.0 | 4800 | 1.1455 | 0.7351 | 0.7355 | | 0.1065 | 833.33 | 5000 | 1.1937 | 0.7358 | 0.7361 | | 0.1038 | 866.67 | 5200 | 1.1647 | 0.7368 | 0.7368 | | 0.0996 | 900.0 | 5400 | 1.1829 | 0.7395 | 0.7395 | | 0.097 | 933.33 | 5600 | 1.1467 | 0.7353 | 0.7355 | | 0.0942 | 966.67 | 5800 | 1.1414 | 0.7408 | 0.7408 | | 0.092 | 1000.0 | 6000 | 1.1551 | 0.7367 | 0.7368 | | 0.0892 | 1033.33 | 6200 | 1.1560 | 0.7341 | 0.7341 | | 0.0875 | 1066.67 | 6400 | 1.2276 | 0.7399 | 0.7401 | | 0.0843 | 1100.0 | 6600 | 1.1638 | 0.7317 | 0.7321 | | 0.083 | 1133.33 | 6800 | 1.2357 | 0.7339 | 0.7341 | | 0.0823 | 1166.67 | 7000 | 1.2278 | 0.7366 | 0.7368 | | 0.0785 | 1200.0 | 7200 | 1.2453 | 0.7346 | 0.7348 | | 0.077 | 1233.33 | 7400 | 1.2399 | 0.7333 | 0.7335 | | 0.0758 | 1266.67 | 7600 | 1.2508 | 0.7347 | 0.7348 | | 0.0754 | 1300.0 | 7800 | 1.2100 | 0.7361 | 0.7361 | | 0.0735 | 1333.33 | 8000 | 1.2400 | 0.7333 | 0.7335 | | 0.0724 | 1366.67 | 8200 | 1.2479 | 0.7325 | 0.7328 | | 0.0722 | 1400.0 | 8400 | 1.2592 | 0.7353 | 0.7355 | | 0.0717 | 1433.33 | 8600 | 1.2564 | 0.7346 | 0.7348 | | 0.0702 | 1466.67 | 
8800 | 1.2764 | 0.7373 | 0.7375 | | 0.0685 | 1500.0 | 9000 | 1.2851 | 0.7387 | 0.7388 | | 0.0688 | 1533.33 | 9200 | 1.2853 | 0.7387 | 0.7388 | | 0.0689 | 1566.67 | 9400 | 1.2792 | 0.7374 | 0.7375 | | 0.0673 | 1600.0 | 9600 | 1.2888 | 0.7413 | 0.7415 | | 0.0668 | 1633.33 | 9800 | 1.2841 | 0.7367 | 0.7368 | | 0.0669 | 1666.67 | 10000 | 1.2935 | 0.7380 | 0.7381 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
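A loading sketch, assuming this repository is a standard PEFT adapter sitting on top of the seqsight base model named in the card; the use of `AutoModel` and `trust_remote_code=True` are guesses, since the card does not document the task head or whether the base model ships custom code:

```python
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3-seqsight_4096_512_15M-L32_all"

# Assumed workflow: load the base model, then attach the fine-tuned adapter weights.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModel.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```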
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3-seqsight_4096_512_15M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3-seqsight_4096_512_15M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-04-17T06:21:38+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
GUE\_EMP\_H3-seqsight\_4096\_512\_15M-L32\_all ============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset. It achieves the following results on the evaluation set: * Loss: 0.5628 * F1 Score: 0.7388 * Accuracy: 0.7388 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1745 - F1: 0.8544 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2971 | 1.0 | 835 | 0.2108 | 0.8076 | | 0.1566 | 2.0 | 1670 | 0.1722 | 0.8470 | | 0.1033 | 3.0 | 2505 | 0.1745 | 0.8544 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-all", "results": []}]}
cogsci13/xlm-roberta-base-finetuned-panx-all
null
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T06:23:03+00:00
[]
[]
TAGS #transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-all =================================== This model is a fine-tuned version of xlm-roberta-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1745 * F1: 0.8544 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_Instruction0_ASPOL This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
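A generation sketch, assuming the checkpoint keeps the usual seq2seq interface of its ViT5 base; the Vietnamese prompt below is a placeholder, since the card does not document the instruction template used for the COQE/ASPOL task:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ThuyNT/CS505_COQE_viT5_Instruction0_ASPOL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input; the real instruction format for ASPOL extraction is not documented here.
text = "Điện thoại A chụp ảnh đẹp hơn điện thoại B."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=96)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```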
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_Instruction0_ASPOL", "results": []}]}
ThuyNT/CS505_COQE_viT5_Instruction0_ASPOL
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T06:27:54+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# CS505_COQE_viT5_Instruction0_ASPOL This model is a fine-tuned version of VietAI/vit5-large on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# CS505_COQE_viT5_Instruction0_ASPOL\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# CS505_COQE_viT5_Instruction0_ASPOL\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]