| Column | Type | Values / length range |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | listlengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
| arxiv | listlengths | 0 to 201 |
| languages | listlengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | listlengths | 0 to 722 |
| processed_texts | listlengths | 1 to 723 |
question-answering
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # public-finance-mistral This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.2368 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.3 - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 6 | 1.2601 | | 1.906 | 2.0 | 12 | 0.7329 | | 1.906 | 3.0 | 18 | 0.4273 | | 0.9496 | 4.0 | 24 | 0.2872 | | 0.7822 | 5.0 | 30 | 0.2423 | | 0.7822 | 6.0 | 36 | 0.2370 | | 0.6482 | 7.0 | 42 | 0.2368 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
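A minimal usage sketch for the card above (not part of the original card): it assumes the `clement-cvll/public-finance-mistral` adapter is loaded with `peft` on top of the `mistralai/Mistral-7B-Instruct-v0.2` base model, and the example question is a made-up placeholder.

```python
# Sketch: load the public-finance PEFT adapter on top of its Mistral base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "clement-cvll/public-finance-mistral"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the fine-tuned adapter weights

# Hypothetical question, formatted with the Mistral instruct template.
prompt = "[INST] What is the difference between a budget deficit and public debt? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```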
{"language": ["en"], "license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator", "clement-cvll/public-finance-generated-qa"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "pipeline_tag": "question-answering", "model-index": [{"name": "public-finance-mistral", "results": []}]}
clement-cvll/public-finance-mistral
null
[ "peft", "tensorboard", "safetensors", "mistral", "trl", "sft", "generated_from_trainer", "question-answering", "en", "dataset:generator", "dataset:clement-cvll/public-finance-generated-qa", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "4-bit", "region:us" ]
null
2024-04-15T20:41:30+00:00
[]
[ "en" ]
TAGS #peft #tensorboard #safetensors #mistral #trl #sft #generated_from_trainer #question-answering #en #dataset-generator #dataset-clement-cvll/public-finance-generated-qa #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #4-bit #region-us
public-finance-mistral ====================== This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset. It achieves the following results on the evaluation set: * Loss: 0.2368 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.3 * num\_epochs: 7 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.3\n* num\\_epochs: 7", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #mistral #trl #sft #generated_from_trainer #question-answering #en #dataset-generator #dataset-clement-cvll/public-finance-generated-qa #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #4-bit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.3\n* num\\_epochs: 7", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-samsum-finetuned This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1331 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.121 | 1.0 | 74 | 0.1348 | | 0.0903 | 2.0 | 148 | 0.1331 | | 0.0795 | 3.0 | 222 | 0.1331 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
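A quick, hedged example of running this checkpoint (not in the original card): it assumes the standard summarization `pipeline` works for this BART model, and the dialogue is a made-up SAMSum-style stand-in.

```python
# Sketch: dialogue summarization with the fine-tuned BART checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="raffenmb/bart-samsum-finetuned")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Great, see you there!"
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```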
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "facebook/bart-large-cnn", "model-index": [{"name": "bart-samsum-finetuned", "results": []}]}
raffenmb/bart-samsum-finetuned
null
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T20:42:08+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us
bart-samsum-finetuned ===================== This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1331 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="girayo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
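The snippet above relies on a `load_from_hub` helper that the card does not define. Below is a hedged sketch of one way to implement it, assuming the pushed artifact is a pickled dict containing at least an `env_id` key (as the snippet implies).

```python
# Sketch: a possible load_from_hub helper for the Q-learning snippet above.
import pickle

import gymnasium as gym  # the snippet above refers to this as `gym`
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-learning artifact from the Hub (assumed layout)."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="girayo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # matches the card's no_slippery variant
```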
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
girayo/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-15T20:43:07+00:00
[]
[]
TAGS #FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1 This is a trained model of a Q-Learning agent playing FrozenLake-v1. ## Usage
[ "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/uncensorie/stairolz-70b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/stairolz-70b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-i1-GGUF/resolve/main/stairolz-70b.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-i1-GGUF/resolve/main/stairolz-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-i1-GGUF/resolve/main/stairolz-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-i1-GGUF/resolve/main/stairolz-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-i1-GGUF/resolve/main/stairolz-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-i1-GGUF/resolve/main/stairolz-70b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 39.1 | slightly worse than Q4_K_S | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-i1-GGUF/resolve/main/stairolz-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-i1-GGUF/resolve/main/stairolz-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-i1-GGUF/resolve/main/stairolz-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-i1-GGUF/resolve/main/stairolz-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/stairolz-70b-i1-GGUF/resolve/main/stairolz-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/stairolz-70b-i1-GGUF/resolve/main/stairolz-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
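A hedged sketch (not part of the card) of the multi-part step the Usage section points to: the two Q6_K part files from the table are concatenated byte-for-byte and the merged GGUF is loaded with `llama-cpp-python`; the context size and prompt are arbitrary placeholders.

```python
# Sketch: merge the split Q6_K GGUF parts and load the result.
import shutil
from pathlib import Path

from llama_cpp import Llama

parts = [
    Path("stairolz-70b.i1-Q6_K.gguf.part1of2"),
    Path("stairolz-70b.i1-Q6_K.gguf.part2of2"),
]
merged = Path("stairolz-70b.i1-Q6_K.gguf")

with merged.open("wb") as out:
    for part in parts:
        with part.open("rb") as src:
            shutil.copyfileobj(src, out)  # stream-copy; avoids holding 50+ GB in RAM

llm = Llama(model_path=str(merged), n_ctx=4096)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```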
{"language": ["en"], "license": "llama2", "library_name": "transformers", "base_model": "uncensorie/stairolz-70b", "quantized_by": "mradermacher"}
mradermacher/stairolz-70b-i1-GGUF
null
[ "transformers", "gguf", "en", "base_model:uncensorie/stairolz-70b", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-15T20:43:45+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-uncensorie/stairolz-70b #license-llama2 #endpoints_compatible #region-us
About ----- weighted/imatrix quants of URL static quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-uncensorie/stairolz-70b #license-llama2 #endpoints_compatible #region-us \n" ]
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
santoshtyss/lex_mistral_final_53000
null
[ "transformers", "safetensors", "mistral", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T20:44:51+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #feature-extraction #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #feature-extraction #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: MLIsaac/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
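A small, hedged addition (not in the original card): one way to pull the pushed Huggy files (the repo is tagged with onnx and tensorboard artifacts) to disk with `huggingface_hub` for local inspection or reuse.

```python
# Sketch: download the published ppo-Huggy artifacts for local use.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="MLIsaac/ppo-Huggy")
print("Model files downloaded to:", local_dir)
```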
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
MLIsaac/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-04-15T20:46:20+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
# ppo Agent playing Huggy This is a trained model of a ppo agent playing Huggy using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how ML-Agents works: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser 1. If the environment is part of ML-Agents official environments, go to URL 2. Step 1: Find your model_id: MLIsaac/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: MLIsaac/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n", "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: MLIsaac/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_4-seqsight_4096_512_46M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset. It achieves the following results on the evaluation set: - Loss: 1.6512 - F1 Score: 0.5826 - Accuracy: 0.5831 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6574 | 25.0 | 200 | 0.7103 | 0.5961 | 0.5985 | | 0.5433 | 50.0 | 400 | 0.7960 | 0.5752 | 0.5762 | | 0.467 | 75.0 | 600 | 0.8451 | 0.5725 | 0.5725 | | 0.4062 | 100.0 | 800 | 0.9353 | 0.5606 | 0.5651 | | 0.3515 | 125.0 | 1000 | 1.0114 | 0.5800 | 0.5820 | | 0.3084 | 150.0 | 1200 | 1.0154 | 0.5858 | 0.5858 | | 0.271 | 175.0 | 1400 | 1.1725 | 0.5735 | 0.5757 | | 0.2374 | 200.0 | 1600 | 1.2035 | 0.5896 | 0.5895 | | 0.2113 | 225.0 | 1800 | 1.1801 | 0.5889 | 0.5890 | | 0.1879 | 250.0 | 2000 | 1.3613 | 0.5917 | 0.5916 | | 0.1685 | 275.0 | 2200 | 1.3616 | 0.5912 | 0.5911 | | 0.1538 | 300.0 | 2400 | 1.4086 | 0.5907 | 0.5916 | | 0.141 | 325.0 | 2600 | 1.3944 | 0.5928 | 0.5932 | | 0.1263 | 350.0 | 2800 | 1.4528 | 0.5880 | 0.5879 | | 0.1163 | 375.0 | 3000 | 1.4832 | 0.5868 | 0.5868 | | 0.1082 | 400.0 | 3200 | 1.5377 | 0.5825 | 0.5826 | | 0.1001 | 425.0 | 3400 | 1.5248 | 0.5853 | 0.5852 | | 0.0927 | 450.0 | 3600 | 1.5802 | 0.5875 | 0.5874 | | 0.0862 | 475.0 | 3800 | 1.6188 | 0.5979 | 0.5980 | | 0.0792 | 500.0 | 4000 | 1.6380 | 0.5885 | 0.5884 | | 0.0759 | 525.0 | 4200 | 1.6482 | 0.5943 | 0.5943 | | 0.0707 | 550.0 | 4400 | 1.7708 | 0.5832 | 0.5831 | | 0.0682 | 575.0 | 4600 | 1.6190 | 0.5930 | 0.5932 | | 0.0634 | 600.0 | 4800 | 1.6931 | 0.5886 | 0.5884 | | 0.0598 | 625.0 | 5000 | 1.7523 | 0.5853 | 0.5852 | | 0.0568 | 650.0 | 5200 | 1.6618 | 0.5880 | 0.5884 | | 0.0548 | 675.0 | 5400 | 1.7466 | 0.5867 | 0.5868 | | 0.0515 | 700.0 | 5600 | 1.7501 | 0.5782 | 0.5783 | | 0.0502 | 725.0 | 5800 | 1.7977 | 0.5901 | 0.5905 | | 0.047 | 750.0 | 6000 | 1.7867 | 0.5795 | 0.5794 | | 0.0457 | 775.0 | 6200 | 1.8632 | 0.5747 | 0.5746 | | 0.0437 | 800.0 | 6400 | 1.8232 | 0.5798 | 0.5805 | | 0.0422 | 825.0 | 6600 | 1.8684 | 0.5803 | 0.5805 | | 0.0399 | 850.0 | 6800 | 1.8498 | 0.5844 | 0.5847 | | 0.0385 | 875.0 | 7000 | 1.8414 | 0.5823 | 0.5836 | | 0.038 | 900.0 | 7200 | 1.8976 | 0.5843 | 0.5842 | | 0.037 | 925.0 | 7400 | 1.8720 | 0.5779 | 0.5778 | | 0.0348 | 950.0 | 7600 | 1.9380 | 0.5799 | 0.5799 | | 0.0342 | 975.0 | 7800 | 1.9170 | 0.5859 | 0.5858 | | 0.0329 | 1000.0 | 8000 | 1.9431 | 0.5873 | 0.5874 | | 0.0327 | 1025.0 | 8200 | 1.9294 | 0.5831 | 0.5831 | | 0.0316 | 1050.0 | 8400 | 1.9850 | 0.5793 | 0.5794 | | 0.0312 | 1075.0 | 8600 | 1.9593 | 0.5830 | 0.5831 | | 0.0298 | 1100.0 | 8800 | 1.9691 | 0.5831 | 0.5831 | | 
0.0294 | 1125.0 | 9000 | 1.9704 | 0.5811 | 0.5810 | | 0.0288 | 1150.0 | 9200 | 1.9948 | 0.5783 | 0.5783 | | 0.0282 | 1175.0 | 9400 | 1.9939 | 0.5838 | 0.5836 | | 0.028 | 1200.0 | 9600 | 1.9769 | 0.5840 | 0.5842 | | 0.0275 | 1225.0 | 9800 | 1.9609 | 0.5864 | 0.5863 | | 0.0274 | 1250.0 | 10000 | 1.9741 | 0.5848 | 0.5847 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_mouse_4-seqsight_4096_512_46M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_4096_512_46M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-15T20:47:14+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_mouse\_4-seqsight\_4096\_512\_46M-L32\_all =============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset. It achieves the following results on the evaluation set: * Loss: 1.6512 * F1 Score: 0.5826 * Accuracy: 0.5831 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="girayo/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.54 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]}
girayo/Taxi-v3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-15T20:49:26+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3 This is a trained model of a Q-Learning agent playing Taxi-v3. ## Usage
[ "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="dragonflymoss/taxi_model", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "taxi_model", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.50 +/- 2.73", "name": "mean_reward", "verified": false}]}]}]}
dragonflymoss/taxi_model
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-15T20:50:29+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3 This is a trained model of a Q-Learning agent playing Taxi-v3. ## Usage
[ "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
BohdanPetryshyn/openapi-completion-merged
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T20:51:37+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "transformers", "base_model": "mistralai/Mistral-7B-v0.1", "pipeline_tag": "text-generation"}
Jyotiyadav/cp2
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T20:55:59+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
null
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "diffusers"}
batprem/my-controlnet-model
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2024-04-15T20:59:13+00:00
[ "1910.09700" ]
[]
TAGS #diffusers #safetensors #arxiv-1910.09700 #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for OLMo 1.7-7B **For transformers versions v4.40.0 or newer, please use [OLMo 1.7-7B HF](https://huggingface.co/allenai/OLMo-1.7-7B-hf) instead.** OLMo 1.7 7B is the latest version of the original [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) model rocking a 24 point increase in MMLU, among other evaluations improvements, from an improved version of the Dolma dataset and staged training. OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models. The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset. We release all code, checkpoints, logs, and details involved in training these models. ## Model Details The core models released in this batch are the following: | Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length | |------|--------|---------|-------------|-----------------|----------------| | [OLMo 1B](https://huggingface.co/allenai/OLMo-1B) | 3 Trillion |16 | 2048 | 16 | 2048 | | [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) | 2.5 Trillion | 32 | 4096 | 32 | 2048 | | [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T) | 2 Trillion | 32 | 4096 | 32 | 2048 | | [OLMo 1.7-7B](https://huggingface.co/allenai/OLMo-1.7-7B) | 2.05 Trillion | 32 | 4096 | 32 | 4096 | *Note: OLMo 1.7-7B also includes QKV clipping.* [Coming soon] We are releasing many checkpoints for these models, for every 1000 training steps. The naming convention is `step1000-tokens4B`. To load a specific model revision with HuggingFace, simply add the argument `revision`: ```bash import hf_olmo # pip install ai2-olmo olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1.7-7B", revision="step1000-tokens4B") ``` All revisions/branches are listed in the file `revisions.txt`. Or, you can access all the revisions for the models via the following code snippet: ```python from huggingface_hub import list_repo_refs out = list_repo_refs("allenai/OLMo-1.7-7B") branches = [b.name for b in out.branches] ``` A few revisions were lost due to an error, but the vast majority are present. ### Model Description - **Developed by:** Allen Institute for AI (AI2) - **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW - **Model type:** a Transformer style autoregressive language model. - **Language(s) (NLP):** English - **License:** The code and model are released under Apache 2.0. - **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org` - **Date cutoff:** Oct. 2023, with most data from Feb./March 2023 based on Dolma dataset version. 
### Model Sources - **Project Page:** https://allenai.org/olmo - **Repositories:** - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo - Evaluation code: https://github.com/allenai/OLMo-Eval - Further fine-tuning code: https://github.com/allenai/open-instruct - **Paper:** [Link](https://arxiv.org/abs/2402.00838) - **Technical blog post:** https://blog.allenai.org/olmo-1-7-7b-a-24-point-improvement-on-mmlu-92b43f7d269d - **W&B Logs:** [pretraining](https://wandb.ai/ai2-llm/OLMo-7B/groups/OLMo-1.7-7B), [annealing](https://wandb.ai/ai2-llm/OLMo-7B/groups/OLMo-1.7-7B-anneal) <!-- - **Press release:** TODO --> ## Uses ### Inference *Note: The OLMo models will shortly be included in Transformers.* When the [PR](https://github.com/huggingface/transformers/pull/29890) is merged, you will no longer need to use `trust_remote_code=True` or install `ai2-olmo` to use the model. Then, install Transformers [from source](https://huggingface.co/docs/transformers/en/installation#install-from-source). Quickly get inference running with the following required installation: ```bash pip install ai2-olmo ``` Now, proceed as usual with HuggingFace: ```python import hf_olmo from transformers import AutoModelForCausalLM, AutoTokenizer olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1.7-7B") tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1.7-7B") message = ["Language modeling is "] inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False) # optional verifying cuda # inputs = {k: v.to('cuda') for k,v in inputs.items()} # olmo = olmo.to('cuda') response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95) print(tokenizer.batch_decode(response, skip_special_tokens=True)[0]) >> 'Language modeling is the first step to build natural language generation...' ``` Alternatively, with the pipeline abstraction: ```python import hf_olmo from transformers import pipeline olmo_pipe = pipeline("text-generation", model="allenai/OLMo-1.7-7B") print(olmo_pipe("Language modeling is ")) >> 'Language modeling is a branch of natural language processing that aims to...' ``` Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-1.7-7B", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`). The quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues. Note, you may see the following error if `ai2-olmo` is not installed correctly, which is caused by internal Python check naming. We'll update the code soon to make this error clearer. ```bash raise ImportError( ImportError: This modeling file requires the following packages that were not found in your environment: hf_olmo. Run `pip install hf_olmo` ``` ### Fine-tuning Model fine-tuning can be done from the final checkpoint (the `main` revision of this model) or many intermediate checkpoints. Two recipes for tuning are available. 1. Fine-tune with the OLMo repository: ```bash torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \ --data.paths=[{path_to_data}/input_ids.npy] \ --data.label_mask_paths=[{path_to_data}/label_mask.npy] \ --load_path={path_to_checkpoint} \ --reset_trainer_state ``` For more documentation, see the [GitHub readme](https://github.com/allenai/OLMo?tab=readme-ov-file#fine-tuning). 2. Further fine-tuning support is being developed in AI2's Open Instruct repository. 
Details are [here](https://github.com/allenai/open-instruct). ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> Core model results for the new and original 7B model are found below. | Task | Llama-7b | Llama2-7b | Falcon-7b | Mpt-7b | OLMo-7B | Llama2-13b | **OLMo 1.7-7B** | |-------------------|----------|-----------|-----------|--------|---------|------------|-------------| | arc_c | 44.5 | 48.5 | 47.5 | 46.5 | 48.5 | 52.8 | 42.5 | | arc_e | 67.9 | 69.5 | 70.4 | 70.5 | 65.4 | 73.7 | 67.2 | | boolq | 75.4 | 80.2 | 74.6 | 74.2 | 73.4 | 82.2 | 83.7 | | copa | 91.0 | 86.0 | 86.0 | 85.0 | 90.0 | 90.0 | 86.0 | | hellaswag | 76.2 | 76.8 | 75.9 | 77.6 | 76.4 | 78.6 | 75.5 | | openbookqa | 51.2 | 48.4 | 53.0 | 48.6 | 50.4 | 51.8 | 50.0 | | piqa | 77.2 | 76.7 | 78.5 | 77.3 | 78.4 | 79.0 | 77.5 | | sciq | 93.9 | 94.5 | 93.9 | 93.7 | 93.8 | 95.5 | 96.7 | | winogrande | 70.5 | 69.4 | 68.9 | 69.9 | 67.9 | 73.5 | 69.8 | | truthfulQA (MC2) | 33.9 | 38.5 | 34.0 | 33.0 | 36.0 | 36.8 | 35.8 | | MMLU (5 shot MC) | 31.5 | 45.0 | 24.0 | 30.8 | 28.3 | 55.5 | 52.0 | | GSM8k | 10.0 | 12.0 | 4.0 | 4.5 | 8.5 | 25.0 | 29.0 | | Full average | 60.3 | 62.1 | 59.2 | 59.3 | 59.8 | 66.2 | 63.8 | And for the 1B model: | task | random | [StableLM 2 1.6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)\* | [Pythia 1B](https://huggingface.co/EleutherAI/pythia-1b) | [TinyLlama 1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) | **OLMo 1B** (ours) | | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ | ----------------- | --------- | -------------------------------------- | ------- | | arc_challenge | 25 | 43.81 | 33.11 | 34.78 | 34.45 | | arc_easy | 25 | 63.68 | 50.18 | 53.16 | 58.07 | | boolq | 50 | 76.6 | 61.8 | 64.6 | 60.7 | | copa | 50 | 84 | 72 | 78 | 79 | | hellaswag | 25 | 68.2 | 44.7 | 58.7 | 62.5 | | openbookqa | 25 | 45.8 | 37.8 | 43.6 | 46.4 | | piqa | 50 | 74 | 69.1 | 71.1 | 73.7 | | sciq | 25 | 94.7 | 86 | 90.5 | 88.1 | | winogrande | 50 | 64.9 | 53.3 | 58.9 | 58.9 | | Average | 36.11 | 68.41 | 56.44 | 61.48 | 62.42 | \*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging. ## Model Details ### Data For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma) documentation. **This model uses the new 1.7 version with more data sources, better deduplication, and quality filtering**. During the annealing phase we use a higher quality subset of Dolma with a linearly decaying learning rate to 0. ### Staged training / annealing In contrast to OLMo 1.0, we trained OLMo 1.7 with a two-stage curriculum: * In the first stage, we trained the model from scratch on the Dolma 1.7 dataset. We set a cosine learning rate schedule with a warmup of 2500 steps, a peak learning rate of 3e-4, and a cosine decay to 3e-5 after 3T tokens. We cut off this stage after 2T tokens, when the learning rate is still high. * At this point we switch to the second stage, in which we train on a higher-quality subset of Dolma 1.7 (see below) for another 50B tokens, while linearly decaying the learning rate to 0. 
Our high-quality subset includes (1) using all available Wikipedia, OpenWebMath and Flan data, (2) removing Dolma CC, CC News, and Megawika, and (3) rebalancing remaining sources to achieve approximately equal proportions of each. See exact token counts and relative proportions of this second stage mix below. Both stages contribute equally to the final performance of the OLMo model. After the first stage, OLMo 1.7 already outperforms OLMo 1.0. The second stage consistently adds 2 to 3 points of performance on top. ### Architecture OLMo 7B architecture with peer models for comparison. | | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | PaLM 8B | |------------------------|-------------------|---------------------|--------------------|--------------------|------------------| | d_model | 4096 | 4096 | 4096 | 4544 | 4096 | | num heads | 32 | 32 | 32 | 71 | 16 | | num layers | 32 | 32 | 32 | 32 | 32 | | MLP ratio | ~8/3 | ~8/3 | ~8/3 | 4 | 4 | | LayerNorm type | non-parametric LN | RMSNorm | parametric LN | parametric LN | parametric LN | | pos embeddings | RoPE | RoPE | RoPE | RoPE | RoPE | | attention variant | full | GQA | full | MQA | MQA | | biases | none | none | in LN only | in LN only | none | | block type | sequential | sequential | sequential | parallel | parallel | | activation | SwiGLU | SwiGLU | SwiGLU | GeLU | SwiGLU | | sequence length | 2048 | 4096 | 2048 | 2048 | 2048 | | batch size (instances) | 2160 | 1024 | 2048 | 2304 | 512 | | batch size (tokens) | ~4M | ~4M | ~4M | ~4M | ~1M | | weight tying | no | no | no | no | yes | ### Hyperparameters AdamW optimizer parameters are shown below. | Size | Peak LR | Betas | Epsilon | Weight Decay | |------|------------|-----------------|-------------|--------------| | 1B | 4.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 | | 7B | 3.0E-4 | (0.9, 0.99) | 1.0E-5 | 0.1 | Optimizer settings comparison with peer models. | | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | |-----------------------|------------------|---------------------|--------------------|--------------------| | warmup steps | 5000 | 2000 | 2000 | 1000 | | peak LR | 3.0E-04 | 3.0E-04 | 3.0E-04 | 6.0E-04 | | minimum LR | 3.0E-05 | 3.0E-05 | 3.0E-05 | 1.2E-05 | | weight decay | 0.1 | 0.1 | 0.1 | 0.1 | | beta1 | 0.9 | 0.9 | 0.9 | 0.99 | | beta2 | 0.95 | 0.95 | 0.95 | 0.999 | | epsilon | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 | | LR schedule | linear | cosine | cosine | cosine | | gradient clipping | global 1.0 | global 1.0 | global 1.0 | global 1.0 | | gradient reduce dtype | FP32 | FP32 | FP32 | BF16 | | optimizer state dtype | FP32 | most likely FP32 | FP32 | FP32 | ## Environmental Impact OLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML. A summary of the environmental impact. Further details are available in the paper. 
| | GPU Type | Power Consumption From GPUs | Carbon Intensity (kg CO₂e/KWh) | Carbon Emissions (tCO₂eq) | |-----------|------------|-----------------------------|--------------------------------|---------------------------| | OLMo 7B Twin | MI250X ([LUMI supercomputer](https://www.lumi-supercomputer.eu)) | 135 MWh | 0* | 0* | | OLMo 7B | A100-40GB ([MosaicML](https://www.mosaicml.com)) | 104 MWh | 0.656 | 75.05 | ## Bias, Risks, and Limitations Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content. Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology. Otherwise, many facts from OLMo or any LLM will often not be true, so they should be checked. ## Citation **BibTeX:** ``` @article{Groeneveld2023OLMo, title={OLMo: Accelerating the Science of Language Models}, author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh}, journal={Preprint}, year={2024} } ``` **APA:** Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint. ## Model Card Contact For errors in this model card, contact Nathan, `{nathanl} at allenai dot org`.
{"language": ["en"], "license": "apache-2.0", "datasets": ["allenai/dolma"]}
allenai/OLMo-1.7-7B
null
[ "transformers", "pytorch", "olmo", "text-generation", "en", "dataset:allenai/dolma", "arxiv:2402.00838", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-15T21:03:34+00:00
[ "2402.00838" ]
[ "en" ]
TAGS #transformers #pytorch #olmo #text-generation #en #dataset-allenai/dolma #arxiv-2402.00838 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
<img src="URL alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Model Card for OLMo 1.7-7B ========================== For transformers versions v4.40.0 or newer, please use OLMo 1.7-7B HF instead. OLMo 1.7 7B is the latest version of the original OLMo 7B model rocking a 24 point increase in MMLU, among other evaluations improvements, from an improved version of the Dolma dataset and staged training. OLMo is a series of Open Language Models designed to enable the science of language models. The OLMo models are trained on the Dolma dataset. We release all code, checkpoints, logs, and details involved in training these models. Model Details ------------- The core models released in this batch are the following: *Note: OLMo 1.7-7B also includes QKV clipping.* [Coming soon] We are releasing many checkpoints for these models, for every 1000 training steps. The naming convention is 'step1000-tokens4B'. To load a specific model revision with HuggingFace, simply add the argument 'revision': All revisions/branches are listed in the file 'URL'. Or, you can access all the revisions for the models via the following code snippet: A few revisions were lost due to an error, but the vast majority are present. ### Model Description * Developed by: Allen Institute for AI (AI2) * Supported by: Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW * Model type: a Transformer style autoregressive language model. * Language(s) (NLP): English * License: The code and model are released under Apache 2.0. * Contact: Technical inquiries: 'olmo at allenai dot org'. Press: 'press at allenai dot org' * Date cutoff: Oct. 2023, with most data from Feb./March 2023 based on Dolma dataset version. ### Model Sources * Project Page: URL * Repositories: + Core repo (training, inference, fine-tuning etc.): URL + Evaluation code: URL + Further fine-tuning code: URL * Paper: Link * Technical blog post: URL * W&B Logs: pretraining, annealing Uses ---- ### Inference *Note: The OLMo models will shortly be included in Transformers.* When the PR is merged, you will no longer need to use 'trust\_remote\_code=True' or install 'ai2-olmo' to use the model. Then, install Transformers from source. Quickly get inference running with the following required installation: Now, proceed as usual with HuggingFace: Alternatively, with the pipeline abstraction: Or, you can make this slightly faster by quantizing the model, e.g. 'AutoModelForCausalLM.from\_pretrained("allenai/OLMo-1.7-7B", torch\_dtype=torch.float16, load\_in\_8bit=True)' (requires 'bitsandbytes'). The quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as 'inputs.input\_ids.to('cuda')' to avoid potential issues. Note, you may see the following error if 'ai2-olmo' is not installed correctly, which is caused by internal Python check naming. We'll update the code soon to make this error clearer. ### Fine-tuning Model fine-tuning can be done from the final checkpoint (the 'main' revision of this model) or many intermediate checkpoints. Two recipes for tuning are available. 1. Fine-tune with the OLMo repository: For more documentation, see the GitHub readme. 2. Further fine-tuning support is being developing in AI2's Open Instruct repository. Details are here. Evaluation ---------- Core model results for the new and original 7B model are found below. 
And for the 1B model: \*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging. Model Details ------------- ### Data For training data details, please see the Dolma documentation. This model uses the new 1.7 version with more data sources, better deduplication, and quality filtering. During the annealing phase we use a higher quality subset of Dolma with a linearly decaying learning rate to 0. ### Staged training / annealing In contrast to OLMo 1.0, we trained OLMo 1.7 with a two-stage curriculum: * In the first stage, we trained the model from scratch on the Dolma 1.7 dataset. We set a cosine learning rate schedule with a warmup of 2500 steps, a peak learning rate of 3e-4, and a cosine decay to 3e-5 after 3T tokens. We cut off this stage after 2T tokens, when the learning rate is still high. * At this point we switch to the second stage, in which we train on a higher-quality subset of Dolma 1.7 (see below) for another 50B tokens, while linearly decaying the learning rate to 0. Our high-quality subset includes (1) using all available Wikipedia, OpenWebMath and Flan data, (2) removing Dolma CC, CC News, and Megawika, and (3) rebalancing remaining sources to achieve approximately equal proportions of each. See exact token counts and relative proportions of this second stage mix below. Both stages contribute equally to the final performance of the OLMo model. After the first stage, OLMo 1.7 already outperforms OLMo 1.0. The second stage consistently adds 2 to 3 points of performance on top. ### Architecture OLMo 7B architecture with peer models for comparison. ### Hyperparameters AdamW optimizer parameters are shown below. Optimizer settings comparison with peer models. Environmental Impact -------------------- OLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML. A summary of the environmental impact. Further details are available in the paper. Bias, Risks, and Limitations ---------------------------- Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content. Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology. Otherwise, many facts from OLMo or any LLM will often not be true, so they should be checked. BibTeX: APA: Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint. Model Card Contact ------------------ For errors in this model card, contact Nathan, '{nathanl} at allenai dot org'.
[ "### Model Description\n\n\n* Developed by: Allen Institute for AI (AI2)\n* Supported by: Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW\n* Model type: a Transformer style autoregressive language model.\n* Language(s) (NLP): English\n* License: The code and model are released under Apache 2.0.\n* Contact: Technical inquiries: 'olmo at allenai dot org'. Press: 'press at allenai dot org'\n* Date cutoff: Oct. 2023, with most data from Feb./March 2023 based on Dolma dataset version.", "### Model Sources\n\n\n* Project Page: URL\n* Repositories:\n\t+ Core repo (training, inference, fine-tuning etc.): URL\n\t+ Evaluation code: URL\n\t+ Further fine-tuning code: URL\n* Paper: Link\n* Technical blog post: URL\n* W&B Logs: pretraining, annealing\n\n\nUses\n----", "### Inference\n\n\n*Note: The OLMo models will shortly be included in Transformers.*\nWhen the PR is merged, you will no longer need to use 'trust\\_remote\\_code=True' or install 'ai2-olmo' to use the model.\nThen, install Transformers from source.\n\n\nQuickly get inference running with the following required installation:\n\n\nNow, proceed as usual with HuggingFace:\n\n\nAlternatively, with the pipeline abstraction:\n\n\nOr, you can make this slightly faster by quantizing the model, e.g. 'AutoModelForCausalLM.from\\_pretrained(\"allenai/OLMo-1.7-7B\", torch\\_dtype=torch.float16, load\\_in\\_8bit=True)' (requires 'bitsandbytes').\nThe quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as 'inputs.input\\_ids.to('cuda')' to avoid potential issues.\n\n\nNote, you may see the following error if 'ai2-olmo' is not installed correctly, which is caused by internal Python check naming. We'll update the code soon to make this error clearer.", "### Fine-tuning\n\n\nModel fine-tuning can be done from the final checkpoint (the 'main' revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.\n\n\n1. Fine-tune with the OLMo repository:\n\n\nFor more documentation, see the GitHub readme.\n\n\n2. Further fine-tuning support is being developing in AI2's Open Instruct repository. Details are here.\n\n\nEvaluation\n----------\n\n\nCore model results for the new and original 7B model are found below.\n\n\n\nAnd for the 1B model:\n\n\n\n\\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging.\n\n\nModel Details\n-------------", "### Data\n\n\nFor training data details, please see the Dolma documentation.\nThis model uses the new 1.7 version with more data sources, better deduplication, and quality filtering.\nDuring the annealing phase we use a higher quality subset of Dolma with a linearly decaying learning rate to 0.", "### Staged training / annealing\n\n\nIn contrast to OLMo 1.0, we trained OLMo 1.7 with a two-stage curriculum:\n\n\n* In the first stage, we trained the model from scratch on the Dolma 1.7 dataset. We set a cosine learning rate schedule with a warmup of 2500 steps, a peak learning rate of 3e-4, and a cosine decay to 3e-5 after 3T tokens. We cut off this stage after 2T tokens, when the learning rate is still high.\n* At this point we switch to the second stage, in which we train on a higher-quality subset of Dolma 1.7 (see below) for another 50B tokens, while linearly decaying the learning rate to 0. 
Our high-quality subset includes (1) using all available Wikipedia, OpenWebMath and Flan data, (2) removing Dolma CC, CC News, and Megawika, and (3) rebalancing remaining sources to achieve approximately equal proportions of each. See exact token counts and relative proportions of this second stage mix below.\nBoth stages contribute equally to the final performance of the OLMo model. After the first stage, OLMo 1.7 already outperforms OLMo 1.0. The second stage consistently adds 2 to 3 points of performance on top.", "### Architecture\n\n\nOLMo 7B architecture with peer models for comparison.", "### Hyperparameters\n\n\nAdamW optimizer parameters are shown below.\n\n\n\nOptimizer settings comparison with peer models.\n\n\n\nEnvironmental Impact\n--------------------\n\n\nOLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.\nA summary of the environmental impact. Further details are available in the paper.\n\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nLike any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.\nSuch content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.\n\n\nOtherwise, many facts from OLMo or any LLM will often not be true, so they should be checked.\n\n\nBibTeX:\n\n\nAPA:\n\n\nGroeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.\n\n\nModel Card Contact\n------------------\n\n\nFor errors in this model card, contact Nathan, '{nathanl} at allenai dot org'." ]
[ "TAGS\n#transformers #pytorch #olmo #text-generation #en #dataset-allenai/dolma #arxiv-2402.00838 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Model Description\n\n\n* Developed by: Allen Institute for AI (AI2)\n* Supported by: Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW\n* Model type: a Transformer style autoregressive language model.\n* Language(s) (NLP): English\n* License: The code and model are released under Apache 2.0.\n* Contact: Technical inquiries: 'olmo at allenai dot org'. Press: 'press at allenai dot org'\n* Date cutoff: Oct. 2023, with most data from Feb./March 2023 based on Dolma dataset version.", "### Model Sources\n\n\n* Project Page: URL\n* Repositories:\n\t+ Core repo (training, inference, fine-tuning etc.): URL\n\t+ Evaluation code: URL\n\t+ Further fine-tuning code: URL\n* Paper: Link\n* Technical blog post: URL\n* W&B Logs: pretraining, annealing\n\n\nUses\n----", "### Inference\n\n\n*Note: The OLMo models will shortly be included in Transformers.*\nWhen the PR is merged, you will no longer need to use 'trust\\_remote\\_code=True' or install 'ai2-olmo' to use the model.\nThen, install Transformers from source.\n\n\nQuickly get inference running with the following required installation:\n\n\nNow, proceed as usual with HuggingFace:\n\n\nAlternatively, with the pipeline abstraction:\n\n\nOr, you can make this slightly faster by quantizing the model, e.g. 'AutoModelForCausalLM.from\\_pretrained(\"allenai/OLMo-1.7-7B\", torch\\_dtype=torch.float16, load\\_in\\_8bit=True)' (requires 'bitsandbytes').\nThe quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as 'inputs.input\\_ids.to('cuda')' to avoid potential issues.\n\n\nNote, you may see the following error if 'ai2-olmo' is not installed correctly, which is caused by internal Python check naming. We'll update the code soon to make this error clearer.", "### Fine-tuning\n\n\nModel fine-tuning can be done from the final checkpoint (the 'main' revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.\n\n\n1. Fine-tune with the OLMo repository:\n\n\nFor more documentation, see the GitHub readme.\n\n\n2. Further fine-tuning support is being developing in AI2's Open Instruct repository. Details are here.\n\n\nEvaluation\n----------\n\n\nCore model results for the new and original 7B model are found below.\n\n\n\nAnd for the 1B model:\n\n\n\n\\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging.\n\n\nModel Details\n-------------", "### Data\n\n\nFor training data details, please see the Dolma documentation.\nThis model uses the new 1.7 version with more data sources, better deduplication, and quality filtering.\nDuring the annealing phase we use a higher quality subset of Dolma with a linearly decaying learning rate to 0.", "### Staged training / annealing\n\n\nIn contrast to OLMo 1.0, we trained OLMo 1.7 with a two-stage curriculum:\n\n\n* In the first stage, we trained the model from scratch on the Dolma 1.7 dataset. We set a cosine learning rate schedule with a warmup of 2500 steps, a peak learning rate of 3e-4, and a cosine decay to 3e-5 after 3T tokens. 
We cut off this stage after 2T tokens, when the learning rate is still high.\n* At this point we switch to the second stage, in which we train on a higher-quality subset of Dolma 1.7 (see below) for another 50B tokens, while linearly decaying the learning rate to 0. Our high-quality subset includes (1) using all available Wikipedia, OpenWebMath and Flan data, (2) removing Dolma CC, CC News, and Megawika, and (3) rebalancing remaining sources to achieve approximately equal proportions of each. See exact token counts and relative proportions of this second stage mix below.\nBoth stages contribute equally to the final performance of the OLMo model. After the first stage, OLMo 1.7 already outperforms OLMo 1.0. The second stage consistently adds 2 to 3 points of performance on top.", "### Architecture\n\n\nOLMo 7B architecture with peer models for comparison.", "### Hyperparameters\n\n\nAdamW optimizer parameters are shown below.\n\n\n\nOptimizer settings comparison with peer models.\n\n\n\nEnvironmental Impact\n--------------------\n\n\nOLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.\nA summary of the environmental impact. Further details are available in the paper.\n\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nLike any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.\nSuch content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.\n\n\nOtherwise, many facts from OLMo or any LLM will often not be true, so they should be checked.\n\n\nBibTeX:\n\n\nAPA:\n\n\nGroeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.\n\n\nModel Card Contact\n------------------\n\n\nFor errors in this model card, contact Nathan, '{nathanl} at allenai dot org'." ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Pavan178/finetuned8b
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T21:04:09+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<div style="display:flex;flex-direction:column;align-content:center;justify-content:center;"> <div style="text-align: center;"> <h1>Artigenz-Coder-DS-6.7B</h1> <p>Artigenz team intends to create family of code generation models that can run very fast on local computers.</p> <p>Artigenz-Coder-DS-6.7B is the first in this family with 6.7B parameters and <strong>13GB</strong> memory footprint 🌟</p> <a href="https://artigenz.github.io/artigenz">HomePage</a> </div> <div style="text-align: center;"> <h2 style="text-align: center;margin-top:40px">About the model</h2> <p>Artigenz-Coder-DS-6.7B was finetuned on DeepSeek-Coder-6.7B-Base. The dataset and scripts will be open-sourced soon.</p> <p>We have open sourced our model weights on 🤗 HF, checkout <a href="https://huggingface.co/Artigenz/Artigenz-Coder-DS-6.7B">here</a>!</p> </div> <h2 style="text-align: center;margin-top:40px">Team</h2> <div style="display: flex; justify-content: space-around; align-items: center; margin-left: 15%; margin-right: 15%;"> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.ibb.co/g4yzvf9/nikita.jpg" alt="Nikita Agarwal" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Nikita Agarwal</p> <a href="https://www.linkedin.com/in/nikita-agawal-iiith/" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/9ySFB5J/linkedin-logo.png" alt="LinkedIn" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">AI Researcher</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">ex Data Scientist at Microsoft</p> <p style="color: grey; font-size: 15px; margin-top: 0">IIIT - Hyderabad, India</p> </div> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.ibb.co/ths81wc/vivek.jpg" alt="Vivek Verma" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Vivek Verma</p> <a href="https://www.linkedin.com/in/vivek-verma-bb9087238/" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/9ySFB5J/linkedin-logo.png" alt="LinkedIn" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> <a href="https://scholar.google.com/citations?user=1b4qBFQAAAAJ&hl=en" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/LSZ8sHc/google-scholar-logo.png" alt="Google Scholar" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">Post Doctoral Associate</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">Florida International Univesity</p> <p style="color: grey; font-size: 15px; margin-top: 0">202 Citations</p> </div> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.ibb.co/XsmfPwX/nalin.jpg" alt="Nalin Abrol" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 
0;margin-bottom:0;display: inline-block;font-size:20px;">Nalin Abrol</p> <a href="https://www.linkedin.com/in/nalin-abrol-aa7211164/" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/9ySFB5J/linkedin-logo.png" alt="LinkedIn" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">ex Software Engineer - Plivo <a href="https://www.ycombinator.com/companies/plivo" style="color:grey">(YC S21)</a></p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">Published in OHBM 2019<a href="" style="color:grey">↗</a></p> <p style="color: grey; font-size: 15px; margin-top: 0">IIIT - Hyderabad, India</p> </div> </div> <h3 style="text-align: center;margin-top:40px">Special Thanks ❤️</h3> <div style="display: flex; justify-content: space-around; align-items: center; margin-left: 15%; margin-right: 15%;"> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.ibb.co/SJBSZFf/Manish-Shrivastava.jpg" alt="Manish Srivastava" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Manish Shrivastava</p> <a href="https://www.linkedin.com/in/manishrivastava/" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/9ySFB5J/linkedin-logo.png" alt="LinkedIn" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> <a href="https://www.iiit.ac.in/people/faculty/m.shrivastava/" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/FJfHhSS/iiith.png" alt="University" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">Assistant Professor</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">Natural Language Processing</p> <p style="color: grey; font-size: 15px; margin-top: 0">IIIT - Hyderabad, India</p> </div> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.ibb.co/qppJyFS/manas.png" alt="Manas Kumar Verma" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Manas Kumar Verma</p> <a href="https://www.linkedin.com/in/thenextmkv/" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/9ySFB5J/linkedin-logo.png" alt="LinkedIn" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> <a href="https://www.ycombinator.com/companies/algouniversity" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/NKjFYvG/yc.png" alt="YC" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">CEO</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">Algouniversity YC(S21)</p> <p style="color: grey; font-size: 15px; margin-top: 0">IIIT - Hyderabad, India</p> </div> <div style="display: 
flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.ibb.co/r7s6KRR/nikhil.png" alt="Nikhil Tadigoppula" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Nikhil Tadigoppula</p> <a href="https://stats.ioinformatics.org/people/2800" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/1Zp7Lmm/ioi.png" alt="IOI" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">AI Researcher</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">Bronze medalist</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">International Olympiad</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">in Informatics 2013</p> <p style="color: grey; font-size: 15px; margin-top: 0">IIIT - Hyderabad, India</p> </div> </div> <div style="text-align: center;"> <h2>What's Next ❓</h2> <p>The dataset and finetuning scripts used to train Artigenz-Coder-DS-6.7B will be released soon for the open-source community to use freely. 🛠️.</p> <p>1B & 3B models from the Artigenz family are next on the roadmap, with the long-term goal of enabling ⚡ fast local inference for code generation.</p> </div> <div style="text-align: center;"> <h2>Special Thanks to the Open Source Community ❤️</h2> <p>We extend our deepest gratitude to the open source community, especially the Bigcode Project, Magicoder, Hugging Face, DeepSeek, Wizard Coder, Code Llama, which enabled the research community to build powerful LLMs.</p> <p>We need many more people to close the gap between proprietary and open-source models, and we are committed to contributing our bit to the goal.</p> </div> <div style="text-align: center;"> <h2>Get in Touch</h2> <p>You can reach out to us on LinkedIn or via email for any queries or collaborations! 😊</p> <div style="display: flex; justify-content: center; align-items: center; gap: 10px; margin-bottom: 20px;"> <a href="https://www.linkedin.com/in/nikita-agawal-iiith/" target="_blank"> <img src="https://i.ibb.co/9ySFB5J/linkedin-logo.png" alt="LinkedIn" style="width: 15px; height: 15px;"> <span style="font-size: 15px;">nikita-agawal-iiith</span> </a> </div> <div style="display: flex; justify-content: center; align-items: center; gap: 10px;"> <img src="https://i.ibb.co/4TgXkKw/email-icon.png" alt="Email" style="width: 15px; height: 15px;"> <span style="font-size: 15px;">[email protected]</span> </div> </div> </div>
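Since the card above describes a local code-completion model finetuned from DeepSeek-Coder-6.7B-Base, a plausible way to try it is a plain `transformers` causal-LM load, sketched below. Only the repo id (`Artigenz/Artigenz-Coder-DS-6.7B`) comes from the card; the dtype, device placement, prompt, and generation settings are illustrative assumptions rather than the authors' recommended usage.

```python
# Hedged sketch: local code completion with Artigenz-Coder-DS-6.7B.
# Repo id is from the card above; dtype, device_map and generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Artigenz/Artigenz-Coder-DS-6.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # roughly consistent with the ~13 GB footprint mentioned in the card
    device_map="auto",
)

prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```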
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["code", "conversatinal"], "license_name": "deepseek", "license_link": "LICENSE"}
Artigenz/Artigenz-Coder-DS-6.7B
null
[ "transformers", "safetensors", "llama", "text-generation", "code", "conversatinal", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T21:06:39+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #code #conversatinal #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<div style="display:flex;flex-direction:column;align-content:center;justify-content:center;"> <div style="text-align: center;"> <h1>Artigenz-Coder-DS-6.7B</h1> <p>Artigenz team intends to create family of code generation models that can run very fast on local computers.</p> <p>Artigenz-Coder-DS-6.7B is the first in this family with 6.7B parameters and <strong>13GB</strong> memory footprint </p> <a href="URL </div> <div style="text-align: center;"> <h2 style="text-align: center;margin-top:40px">About the model</h2> <p>Artigenz-Coder-DS-6.7B was finetuned on DeepSeek-Coder-6.7B-Base. The dataset and scripts will be open-sourced soon.</p> <p>We have open sourced our model weights on HF, checkout <a href="URL </div> <h2 style="text-align: center;margin-top:40px">Team</h2> <div style="display: flex; justify-content: space-around; align-items: center; margin-left: 15%; margin-right: 15%;"> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.URL alt="Nikita Agarwal" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Nikita Agarwal</p> <a href="URL target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.URL alt="LinkedIn" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">AI Researcher</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">ex Data Scientist at Microsoft</p> <p style="color: grey; font-size: 15px; margin-top: 0">IIIT - Hyderabad, India</p> </div> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.URL alt="Vivek Verma" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Vivek Verma</p> <a href="URL target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.URL alt="LinkedIn" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> <a href="URL target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.URL alt="Google Scholar" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">Post Doctoral Associate</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">Florida International Univesity</p> <p style="color: grey; font-size: 15px; margin-top: 0">202 Citations</p> </div> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.URL alt="Nalin Abrol" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Nalin Abrol</p> <a href="URL target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.URL alt="LinkedIn" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">ex Software Engineer - Plivo <a href="URL 
style="color:grey">(YC S21)</a></p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">Published in OHBM 2019<a href="" style="color:grey">↗</a></p> <p style="color: grey; font-size: 15px; margin-top: 0">IIIT - Hyderabad, India</p> </div> </div> <h3 style="text-align: center;margin-top:40px">Special Thanks ️</h3> <div style="display: flex; justify-content: space-around; align-items: center; margin-left: 15%; margin-right: 15%;"> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.URL alt="Manish Srivastava" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Manish Shrivastava</p> <a href="URL target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.URL alt="LinkedIn" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> <a href="URL target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.URL alt="University" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">Assistant Professor</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">Natural Language Processing</p> <p style="color: grey; font-size: 15px; margin-top: 0">IIIT - Hyderabad, India</p> </div> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.URL alt="Manas Kumar Verma" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Manas Kumar Verma</p> <a href="URL target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.URL alt="LinkedIn" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> <a href="URL target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.URL alt="YC" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">CEO</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">Algouniversity YC(S21)</p> <p style="color: grey; font-size: 15px; margin-top: 0">IIIT - Hyderabad, India</p> </div> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.URL alt="Nikhil Tadigoppula" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Nikhil Tadigoppula</p> <a href="URL target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.URL alt="IOI" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">AI Researcher</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">Bronze medalist</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">International Olympiad</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">in 
Informatics 2013</p> <p style="color: grey; font-size: 15px; margin-top: 0">IIIT - Hyderabad, India</p> </div> </div> <div style="text-align: center;"> <h2>What's Next </h2> <p>The dataset and finetuning scripts used to train Artigenz-Coder-DS-6.7B will be released soon for the open-source community to use freely. ️.</p> <p>1B & 3B models from the Artigenz family are next on the roadmap, with the long-term goal of enabling fast local inference for code generation.</p> </div> <div style="text-align: center;"> <h2>Special Thanks to the Open Source Community ️</h2> <p>We extend our deepest gratitude to the open source community, especially the Bigcode Project, Magicoder, Hugging Face, DeepSeek, Wizard Coder, Code Llama, which enabled the research community to build powerful LLMs.</p> <p>We need many more people to close the gap between proprietary and open-source models, and we are committed to contributing our bit to the goal.</p> </div> <div style="text-align: center;"> <h2>Get in Touch</h2> <p>You can reach out to us on LinkedIn or via email for any queries or collaborations! </p> <div style="display: flex; justify-content: center; align-items: center; gap: 10px; margin-bottom: 20px;"> <a href="URL target="_blank"> <img src="https://i.URL alt="LinkedIn" style="width: 15px; height: 15px;"> <span style="font-size: 15px;">nikita-agawal-iiith</span> </a> </div> <div style="display: flex; justify-content: center; align-items: center; gap: 10px;"> <img src="https://i.URL alt="Email" style="width: 15px; height: 15px;"> <span style="font-size: 15px;">URL@URL</span> </div> </div> </div>
[]
[ "TAGS\n#transformers #safetensors #llama #text-generation #code #conversatinal #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_3-seqsight_4096_512_46M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset. It achieves the following results on the evaluation set: - Loss: 3.6376 - F1 Score: 0.6652 - Accuracy: 0.6653 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:| | 0.3282 | 200.0 | 200 | 1.4435 | 0.7180 | 0.7197 | | 0.0405 | 400.0 | 400 | 1.7953 | 0.7153 | 0.7155 | | 0.0185 | 600.0 | 600 | 2.0938 | 0.7236 | 0.7238 | | 0.0098 | 800.0 | 800 | 2.3048 | 0.7067 | 0.7071 | | 0.0074 | 1000.0 | 1000 | 2.2920 | 0.7267 | 0.7280 | | 0.0048 | 1200.0 | 1200 | 2.4868 | 0.7150 | 0.7155 | | 0.0034 | 1400.0 | 1400 | 2.4898 | 0.7152 | 0.7155 | | 0.0034 | 1600.0 | 1600 | 2.5102 | 0.7113 | 0.7113 | | 0.0024 | 1800.0 | 1800 | 2.5634 | 0.7232 | 0.7238 | | 0.0024 | 2000.0 | 2000 | 2.5338 | 0.7256 | 0.7280 | | 0.0022 | 2200.0 | 2200 | 2.6650 | 0.7260 | 0.7280 | | 0.0018 | 2400.0 | 2400 | 2.7152 | 0.7104 | 0.7113 | | 0.0014 | 2600.0 | 2600 | 2.5620 | 0.7321 | 0.7322 | | 0.0015 | 2800.0 | 2800 | 2.4620 | 0.7238 | 0.7238 | | 0.0012 | 3000.0 | 3000 | 2.7872 | 0.7196 | 0.7197 | | 0.0012 | 3200.0 | 3200 | 2.7270 | 0.7273 | 0.7280 | | 0.0011 | 3400.0 | 3400 | 2.7114 | 0.7277 | 0.7280 | | 0.0011 | 3600.0 | 3600 | 2.8449 | 0.7197 | 0.7197 | | 0.0011 | 3800.0 | 3800 | 2.7638 | 0.7197 | 0.7197 | | 0.0009 | 4000.0 | 4000 | 2.7811 | 0.7071 | 0.7071 | | 0.0008 | 4200.0 | 4200 | 2.7689 | 0.7256 | 0.7280 | | 0.0008 | 4400.0 | 4400 | 2.8660 | 0.7154 | 0.7155 | | 0.0009 | 4600.0 | 4600 | 2.8599 | 0.7152 | 0.7155 | | 0.0007 | 4800.0 | 4800 | 2.8757 | 0.7314 | 0.7322 | | 0.0008 | 5000.0 | 5000 | 2.9983 | 0.7278 | 0.7280 | | 0.0007 | 5200.0 | 5200 | 2.9814 | 0.7234 | 0.7238 | | 0.0006 | 5400.0 | 5400 | 3.0309 | 0.7230 | 0.7238 | | 0.0005 | 5600.0 | 5600 | 3.0390 | 0.7237 | 0.7238 | | 0.0005 | 5800.0 | 5800 | 3.0822 | 0.7238 | 0.7238 | | 0.0004 | 6000.0 | 6000 | 3.2641 | 0.7348 | 0.7364 | | 0.0005 | 6200.0 | 6200 | 3.2479 | 0.7107 | 0.7113 | | 0.0006 | 6400.0 | 6400 | 2.9307 | 0.7237 | 0.7238 | | 0.0005 | 6600.0 | 6600 | 3.2046 | 0.7197 | 0.7197 | | 0.0004 | 6800.0 | 6800 | 3.1411 | 0.7280 | 0.7280 | | 0.0004 | 7000.0 | 7000 | 3.3117 | 0.7363 | 0.7364 | | 0.0003 | 7200.0 | 7200 | 3.4686 | 0.7279 | 0.7280 | | 0.0003 | 7400.0 | 7400 | 3.2235 | 0.7321 | 0.7322 | | 0.0003 | 7600.0 | 7600 | 3.1608 | 0.7357 | 0.7364 | | 0.0003 | 7800.0 | 7800 | 3.1914 | 0.7238 | 0.7238 | | 0.0003 | 8000.0 | 8000 | 3.2687 | 0.7238 | 0.7238 | | 0.0003 | 8200.0 | 8200 | 3.0126 | 0.7446 | 0.7448 | | 0.0003 | 8400.0 | 8400 | 3.1532 | 0.7280 | 0.7280 | | 0.0003 | 8600.0 | 8600 | 3.2199 | 0.7446 | 0.7448 | | 0.0002 | 
8800.0 | 8800 | 3.3102 | 0.7322 | 0.7322 | | 0.0002 | 9000.0 | 9000 | 3.1773 | 0.7531 | 0.7531 | | 0.0002 | 9200.0 | 9200 | 3.2094 | 0.7444 | 0.7448 | | 0.0002 | 9400.0 | 9400 | 3.4284 | 0.7348 | 0.7364 | | 0.0001 | 9600.0 | 9600 | 3.3208 | 0.7403 | 0.7406 | | 0.0001 | 9800.0 | 9800 | 3.3515 | 0.7486 | 0.7490 | | 0.0001 | 10000.0 | 10000 | 3.3496 | 0.7444 | 0.7448 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
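Because this repository holds a PEFT adapter rather than a full model, inference requires attaching it to the `mahdibaghbanzadeh/seqsight_4096_512_46M` base named in the card. The sketch below only illustrates that pattern: the AutoModel class, the `trust_remote_code` flag, and the label count for the GUE_mouse_3 task are assumptions, not details stated in the card.

```python
# Illustrative PEFT-adapter loading pattern; model class, trust_remote_code and num_labels are assumed.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_3-seqsight_4096_512_46M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id,
    num_labels=2,              # assumed binary classification, consistent with the F1/accuracy table above
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # adds the LoRA weights on top of the frozen base
model.eval()
```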
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_mouse_3-seqsight_4096_512_46M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_3-seqsight_4096_512_46M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-15T21:08:06+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_mouse\_3-seqsight\_4096\_512\_46M-L32\_all =============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset. It achieves the following results on the evaluation set: * Loss: 3.6376 * F1 Score: 0.6652 * Accuracy: 0.6653 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
This is an ExLlamaV2 quantized model in 4bpw of [mpasila/SeaMax-7B](https://huggingface.co/mpasila/SeaMax-7B) using the default calibration dataset. # Original Model card: # SeaMax-7B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mpasila/PIPPA-Named-7B](https://huggingface.co/mpasila/PIPPA-Named-7B) as a base. ### Models Merged The following models were included in the merge: * [Locutusque/SlimHercules-4.0-Mistral-7B-v0.2](https://huggingface.co/Locutusque/SlimHercules-4.0-Mistral-7B-v0.2) * [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 parameters: density: [1, 0.7, 0.1] # density gradient weight: 1.0 - model: Locutusque/SlimHercules-4.0-Mistral-7B-v0.2 parameters: density: 0.5 weight: [0, 0.3, 0.7, 1] # weight gradient merge_method: ties base_model: mpasila/PIPPA-Named-7B parameters: normalize: true int8_mask: true dtype: bfloat16 ```
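Because this repository holds ExLlamaV2 (exl2) 4bpw weights rather than standard `transformers` checkpoints, a typical first step is simply pulling the files to a local directory and pointing an exl2-compatible runtime at them. The snippet below is a minimal sketch using `huggingface_hub`; the choice of downstream loader (ExLlamaV2's own Python API, text-generation-webui, TabbyAPI, etc.) is left open and is not prescribed by the card.

```python
# Minimal sketch: fetch the 4bpw exl2 weights locally; an ExLlamaV2-compatible runtime then
# loads them from this directory. Nothing beyond the repo id is taken from the card.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("mpasila/SeaMax-7B-exl2-4bpw")
print(f"exl2 model files downloaded to: {local_dir}")
```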
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["mpasila/PIPPA-Named-7B", "Locutusque/SlimHercules-4.0-Mistral-7B-v0.2", "cognitivecomputations/dolphin-2.8-mistral-7b-v02"]}
mpasila/SeaMax-7B-exl2-4bpw
null
[ "transformers", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:mpasila/PIPPA-Named-7B", "base_model:Locutusque/SlimHercules-4.0-Mistral-7B-v0.2", "base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T21:13:36+00:00
[ "2306.01708" ]
[]
TAGS #transformers #mistral #text-generation #mergekit #merge #conversational #arxiv-2306.01708 #base_model-mpasila/PIPPA-Named-7B #base_model-Locutusque/SlimHercules-4.0-Mistral-7B-v0.2 #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This is an ExLlamaV2 quantized model in 4bpw of mpasila/SeaMax-7B using the default calibration dataset. # Original Model card: # SeaMax-7B This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the TIES merge method using mpasila/PIPPA-Named-7B as a base. ### Models Merged The following models were included in the merge: * Locutusque/SlimHercules-4.0-Mistral-7B-v0.2 * cognitivecomputations/dolphin-2.8-mistral-7b-v02 ### Configuration The following YAML configuration was used to produce this model:
[ "# Original Model card:", "# SeaMax-7B\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the TIES merge method using mpasila/PIPPA-Named-7B as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* Locutusque/SlimHercules-4.0-Mistral-7B-v0.2\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #mistral #text-generation #mergekit #merge #conversational #arxiv-2306.01708 #base_model-mpasila/PIPPA-Named-7B #base_model-Locutusque/SlimHercules-4.0-Mistral-7B-v0.2 #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Original Model card:", "# SeaMax-7B\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the TIES merge method using mpasila/PIPPA-Named-7B as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* Locutusque/SlimHercules-4.0-Mistral-7B-v0.2\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-pl-qlora This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the HuggingFaceH4/ultrachat_200k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 2400 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.18.0 - Tokenizers 0.15.1
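The card identifies this as a QLoRA adapter over `alignment-handbook/zephyr-7b-sft-full`, trained on UltraChat-style conversations. One plausible inference pattern, sketched below, is to load the base model, attach the adapter with PEFT, and prompt it through the tokenizer's chat template; the dtype and generation settings are assumptions rather than values from the card.

```python
# Hedged sketch: attach the QLoRA adapter to its zephyr SFT base and chat with it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "alignment-handbook/zephyr-7b-sft-full"
adapter_id = "sengi/zephyr-7b-pl-qlora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "Summarize what QLoRA fine-tuning changes in a base model."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```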
{"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "alignment-handbook", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrachat_200k"], "base_model": "alignment-handbook/zephyr-7b-sft-full", "model-index": [{"name": "zephyr-7b-pl-qlora", "results": []}]}
sengi/zephyr-7b-pl-qlora
null
[ "peft", "tensorboard", "safetensors", "mistral", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "region:us" ]
null
2024-04-15T21:16:07+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #mistral #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #region-us
# zephyr-7b-pl-qlora This model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full on the HuggingFaceH4/ultrachat_200k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 2400 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.18.0 - Tokenizers 0.15.1
[ "# zephyr-7b-pl-qlora\n\nThis model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full on the HuggingFaceH4/ultrachat_200k dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 4\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 16\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 2400\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.37.2\n- Pytorch 2.2.0\n- Datasets 2.18.0\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #mistral #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #region-us \n", "# zephyr-7b-pl-qlora\n\nThis model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full on the HuggingFaceH4/ultrachat_200k dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 4\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 16\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 2400\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.37.2\n- Pytorch 2.2.0\n- Datasets 2.18.0\n- Tokenizers 0.15.1" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # V0415MA3 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0621 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 60 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.2761 | 0.09 | 10 | 1.1235 | | 0.5089 | 0.18 | 20 | 0.1178 | | 0.1173 | 0.27 | 30 | 0.0926 | | 0.0967 | 0.36 | 40 | 0.0792 | | 0.0823 | 0.45 | 50 | 0.0729 | | 0.084 | 0.54 | 60 | 0.0707 | | 0.0741 | 0.63 | 70 | 0.0692 | | 0.0736 | 0.73 | 80 | 0.0684 | | 0.0768 | 0.82 | 90 | 0.0616 | | 0.0742 | 0.91 | 100 | 0.0613 | | 0.0691 | 1.0 | 110 | 0.0641 | | 0.0586 | 1.09 | 120 | 0.0612 | | 0.0597 | 1.18 | 130 | 0.0597 | | 0.0543 | 1.27 | 140 | 0.0657 | | 0.0522 | 1.36 | 150 | 0.0591 | | 0.0591 | 1.45 | 160 | 0.0586 | | 0.0586 | 1.54 | 170 | 0.0585 | | 0.0571 | 1.63 | 180 | 0.0565 | | 0.0513 | 1.72 | 190 | 0.0597 | | 0.0603 | 1.81 | 200 | 0.0564 | | 0.0479 | 1.9 | 210 | 0.0575 | | 0.0486 | 1.99 | 220 | 0.0623 | | 0.0363 | 2.08 | 230 | 0.0591 | | 0.038 | 2.18 | 240 | 0.0613 | | 0.0347 | 2.27 | 250 | 0.0626 | | 0.0323 | 2.36 | 260 | 0.0640 | | 0.0372 | 2.45 | 270 | 0.0650 | | 0.0338 | 2.54 | 280 | 0.0643 | | 0.0326 | 2.63 | 290 | 0.0638 | | 0.0371 | 2.72 | 300 | 0.0627 | | 0.0384 | 2.81 | 310 | 0.0623 | | 0.0355 | 2.9 | 320 | 0.0622 | | 0.0391 | 2.99 | 330 | 0.0621 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
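For readers reproducing this setup, the hyperparameter list above maps almost one-to-one onto Hugging Face `TrainingArguments`; the sketch below is that mapping only, with the output directory and anything not in the table treated as placeholders. Note that the effective batch size of 128 is the per-device batch of 8 multiplied by 16 gradient-accumulation steps.

```python
# Reconstruction of the listed hyperparameters as TrainingArguments; values not in the card are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="V0415MA3",                 # placeholder output path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,        # 8 * 16 = 128 total train batch size
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=60,
    num_train_epochs=3,
    seed=42,
    fp16=True,                             # "Native AMP" mixed precision
)
```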
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0415MA3", "results": []}]}
Litzy619/V0415MA3
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-04-15T21:16:23+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
V0415MA3 ======== This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.0621 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine\_with\_restarts * lr\_scheduler\_warmup\_steps: 60 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ast-finetuned-audioset-10-10-0.450_ESC50 This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.450](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.450) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2887 - Accuracy: 0.9275 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7745 | 0.99 | 66 | 2.3340 | 0.605 | | 0.7521 | 1.99 | 133 | 0.8978 | 0.8875 | | 0.2307 | 3.0 | 200 | 0.5545 | 0.8975 | | 0.0903 | 4.0 | 267 | 0.4063 | 0.925 | | 0.03 | 4.99 | 333 | 0.3488 | 0.92 | | 0.0123 | 5.99 | 400 | 0.2987 | 0.925 | | 0.0101 | 7.0 | 467 | 0.2887 | 0.9275 | | 0.0067 | 8.0 | 534 | 0.2808 | 0.9275 | | 0.0055 | 8.99 | 600 | 0.2784 | 0.9275 | | 0.0051 | 9.89 | 660 | 0.2778 | 0.9275 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
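Given that this is an audio-classification checkpoint, the most direct way to try it is the `transformers` pipeline API, sketched below; the audio file name is a hypothetical ESC-50-style clip and the `top_k` choice is arbitrary.

```python
# Minimal sketch: classify an environmental-sound clip with the fine-tuned AST checkpoint.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="shreyahegde/ast-finetuned-audioset-10-10-0.450_ESC50",
)
predictions = classifier("dog_bark.wav", top_k=5)  # "dog_bark.wav" is a placeholder file path
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```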
{"license": "bsd-3-clause", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "MIT/ast-finetuned-audioset-10-10-0.450", "model-index": [{"name": "ast-finetuned-audioset-10-10-0.450_ESC50", "results": []}]}
shreyahegde/ast-finetuned-audioset-10-10-0.450_ESC50
null
[ "transformers", "tensorboard", "safetensors", "audio-spectrogram-transformer", "audio-classification", "generated_from_trainer", "base_model:MIT/ast-finetuned-audioset-10-10-0.450", "license:bsd-3-clause", "endpoints_compatible", "region:us" ]
null
2024-04-15T21:16:25+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #audio-spectrogram-transformer #audio-classification #generated_from_trainer #base_model-MIT/ast-finetuned-audioset-10-10-0.450 #license-bsd-3-clause #endpoints_compatible #region-us
ast-finetuned-audioset-10-10-0.450\_ESC50 ========================================= This model is a fine-tuned version of MIT/ast-finetuned-audioset-10-10-0.450 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.2887 * Accuracy: 0.9275 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 6 * eval\_batch\_size: 6 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 24 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 6\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 24\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #audio-spectrogram-transformer #audio-classification #generated_from_trainer #base_model-MIT/ast-finetuned-audioset-10-10-0.450 #license-bsd-3-clause #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 6\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 24\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: NousResearch/Hermes-2-Pro-Mistral-7B - model: WizardLM/WizardMath-7B-V1.1 merge_method: slerp base_model: NousResearch/Hermes-2-Pro-Mistral-7B dtype: bfloat16 parameters: t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers ```
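For context on what the `t` schedule in the YAML above controls, the toy function below implements spherical linear interpolation between two flattened weight vectors. It is only an illustration of the formula; mergekit's actual implementation applies per-layer `t` values (the V-shaped curve noted in the config comment), handles real tensor shapes and edge cases, and is normally invoked through its own CLI rather than hand-written code.

```python
# Toy SLERP between two weight vectors; purely illustrative, not mergekit's implementation.
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))  # angle between the two directions
    if omega < eps:                        # nearly parallel vectors: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

w_hermes = np.random.randn(4096)           # stand-ins for corresponding weight tensors from the two models
w_wizardmath = np.random.randn(4096)
blended = slerp(0.5, w_hermes, w_wizardmath)  # equal blend; the schedule above varies t per layer from 0 to 1 and back
```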
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["NousResearch/Hermes-2-Pro-Mistral-7B", "WizardLM/WizardMath-7B-V1.1"]}
mergekit-community/mergekit-slerp-tejngyg
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:WizardLM/WizardMath-7B-V1.1", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T21:16:57+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * NousResearch/Hermes-2-Pro-Mistral-7B * WizardLM/WizardMath-7B-V1.1 ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0
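The only concrete technical detail in this card is the `bitsandbytes` quantization configuration listed in the training-procedure section. The sketch below reconstructs that configuration as a `BitsAndBytesConfig` and attaches the adapter to the base model named in the repo metadata; device placement and anything not listed above are assumptions.

```python
# Sketch: rebuild the listed 4-bit NF4 config and load the adapter on NousResearch/Llama-2-7b-chat-hf.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load_in_4bit: True
    bnb_4bit_quant_type="nf4",             # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=False,       # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype: float16
)

base_id = "NousResearch/Llama-2-7b-chat-hf"
adapter_id = "Sjbok/Llama_2_7B_PEFT_QLORA_V2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```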
{"library_name": "peft", "base_model": "NousResearch/Llama-2-7b-chat-hf"}
Sjbok/Llama_2_7B_PEFT_QLORA_V2
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
null
2024-04-15T21:17:16+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-NousResearch/Llama-2-7b-chat-hf #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ## Training procedure The following 'bitsandbytes' quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.6.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-NousResearch/Llama-2-7b-chat-hf #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.6.0" ]
reinforcement-learning
stable-baselines3
# **A2C** Agent playing **PandaReachDense-v3** This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
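Because the usage section above is still marked TODO, here is a minimal sketch of the usual loading pattern with `huggingface_sb3` and Stable-Baselines3; the checkpoint filename inside the repository and the `panda_gym` environment setup are assumptions, not details taken from this card.

```python
import gymnasium as gym
import panda_gym  # registers the PandaReachDense-v3 environment (assumed dependency)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub (the .zip filename is an assumption)
checkpoint = load_from_hub(
    repo_id="pdejong/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

# Roll out the policy for a few steps
env = gym.make("PandaReachDense-v3")
obs, info = env.reset()
for _ in range(100):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```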
{"library_name": "stable-baselines3", "tags": ["PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaReachDense-v3", "type": "PandaReachDense-v3"}, "metrics": [{"type": "mean_reward", "value": "-0.27 +/- 0.09", "name": "mean_reward", "verified": false}]}]}]}
pdejong/a2c-PandaReachDense-v3
null
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-15T21:19:46+00:00
[]
[]
TAGS #stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# A2C Agent playing PandaReachDense-v3 This is a trained model of a A2C agent playing PandaReachDense-v3 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-mms-300m-ikk-3 This model is a fine-tuned version of [facebook/mms-300m](https://huggingface.co/facebook/mms-300m) on the audiofolder dataset. It achieves the following results on the evaluation set: - eval_loss: 1.3763 - eval_wer: 0.5580 - eval_runtime: 6.8853 - eval_samples_per_second: 14.233 - eval_steps_per_second: 1.888 - epoch: 19.59 - step: 480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data | Step | Training Loss | Validation Loss | Wer | |:----:|:-------------:|:---------------:|:--------:| | 40 | 9.648200 | 4.719201 | 1.000000 | | 80 | 3.953400 | 3.477898 | 1.000000 | | 120 | 3.289700 | 3.099611 | 1.000000 | | 160 | 3.038400 | 2.993551 | 1.000000 | | 200 | 2.994500 | 2.979574 | 1.000000 | | 240 | 2.959000 | 2.941970 | 1.000000 | | 280 | 2.802100 | 2.520133 | 1.000000 | | 320 | 1.862100 | 1.499739 | 0.746423 | | 360 | 1.191800 | 1.336315 | 0.610261 | | 400 | 0.951300 | 1.317062 | 0.598915 | | 440 | 0.773900 | 1.312918 | 0.614702 | | 480 | 0.624700 | 1.376327 | 0.557967 | The following deprecation warnings were emitted during training: /usr/local/lib/python3.10/dist-packages/transformers/models/wav2vec2/processing_wav2vec2.py:156: UserWarning: `as_target_processor` is deprecated and will be removed in v5 of Transformers. You can process your labels by using the argument `text` of the regular `__call__` method (either in the same call as your audio inputs, or in a separate call. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/checkpoint.py:460: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants. warnings.warn( ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
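The card gives no inference example; the following is a minimal, untested sketch of CTC transcription with this checkpoint, assuming the repository contains the processor and tokenizer files saved by the Trainer and that input audio is 16 kHz mono. The audio path is a placeholder.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "ogbi/wav2vec2-mms-300m-ikk-3"
processor = Wav2Vec2Processor.from_pretrained(model_id)  # assumes processor files are in the repo
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder; resample to the expected 16 kHz
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the most likely token at each frame
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```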
{"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "datasets": ["audiofolder"], "base_model": "facebook/mms-300m", "model-index": [{"name": "wav2vec2-mms-300m-ikk-3", "results": []}]}
ogbi/wav2vec2-mms-300m-ikk-3
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:audiofolder", "base_model:facebook/mms-300m", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-15T21:21:20+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-audiofolder #base_model-facebook/mms-300m #license-cc-by-nc-4.0 #endpoints_compatible #region-us
# wav2vec2-mms-300m-ikk-3 This model is a fine-tuned version of facebook/mms-300m on the audiofolder dataset. It achieves the following results on the evaluation set: - eval_loss: 1.3763 - eval_wer: 0.5580 - eval_runtime: 6.8853 - eval_samples_per_second: 14.233 - eval_steps_per_second: 1.888 - epoch: 19.59 - step: 480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data Step Training Loss Validation Loss Wer 40 9.648200 4.719201 1.000000 80 3.953400 3.477898 1.000000 120 3.289700 3.099611 1.000000 160 3.038400 2.993551 1.000000 200 2.994500 2.979574 1.000000 240 2.959000 2.941970 1.000000 280 2.802100 2.520133 1.000000 320 1.862100 1.499739 0.746423 360 1.191800 1.336315 0.610261 400 0.951300 1.317062 0.598915 440 0.773900 1.312918 0.614702 480 0.624700 1.376327 0.557967 /usr/local/lib/python3.10/dist-packages/transformers/models/wav2vec2/processing_wav2vec2.py:156: UserWarning: 'as_target_processor' is deprecated and will be removed in v5 of Transformers. You can process your labels by using the argument 'text' of the regular '__call__' method (either in the same call as your audio inputs, or in a separate call. URL( /usr/local/lib/python3.10/dist-packages/torch/utils/URL: UserWarning: URL.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants. URL( /usr/local/lib/python3.10/dist-packages/transformers/models/wav2vec2/processing_wav2vec2.py:156: UserWarning: 'as_target_processor' is deprecated and will be removed in v5 of Transformers. You can process your labels by using the argument 'text' of the regular '__call__' method (either in the same call as your audio inputs, or in a separate call. URL( /usr/local/lib/python3.10/dist-packages/torch/utils/URL: UserWarning: URL.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants. URL( ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# wav2vec2-mms-300m-ikk-3\n\nThis model is a fine-tuned version of facebook/mms-300m on the audiofolder dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 1.3763\n- eval_wer: 0.5580\n- eval_runtime: 6.8853\n- eval_samples_per_second: 14.233\n- eval_steps_per_second: 1.888\n- epoch: 19.59\n- step: 480", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nStep\tTraining Loss\tValidation Loss\tWer\n40\t9.648200\t4.719201\t1.000000\n80\t3.953400\t3.477898\t1.000000\n120\t3.289700\t3.099611\t1.000000\n160\t3.038400\t2.993551\t1.000000\n200\t2.994500\t2.979574\t1.000000\n240\t2.959000\t2.941970\t1.000000\n280\t2.802100\t2.520133\t1.000000\n320\t1.862100\t1.499739\t0.746423\n360\t1.191800\t1.336315\t0.610261\n400\t0.951300\t1.317062\t0.598915\n440\t0.773900\t1.312918\t0.614702\n480\t0.624700\t1.376327\t0.557967\n\n/usr/local/lib/python3.10/dist-packages/transformers/models/wav2vec2/processing_wav2vec2.py:156: UserWarning: 'as_target_processor' is deprecated and will be removed in v5 of Transformers. You can process your labels by using the argument 'text' of the regular '__call__' method (either in the same call as your audio inputs, or in a separate call.\n URL(\n/usr/local/lib/python3.10/dist-packages/torch/utils/URL: UserWarning: URL.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.\n URL(\n/usr/local/lib/python3.10/dist-packages/transformers/models/wav2vec2/processing_wav2vec2.py:156: UserWarning: 'as_target_processor' is deprecated and will be removed in v5 of Transformers. You can process your labels by using the argument 'text' of the regular '__call__' method (either in the same call as your audio inputs, or in a separate call.\n URL(\n/usr/local/lib/python3.10/dist-packages/torch/utils/URL: UserWarning: URL.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.\n URL(", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-audiofolder #base_model-facebook/mms-300m #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n", "# wav2vec2-mms-300m-ikk-3\n\nThis model is a fine-tuned version of facebook/mms-300m on the audiofolder dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 1.3763\n- eval_wer: 0.5580\n- eval_runtime: 6.8853\n- eval_samples_per_second: 14.233\n- eval_steps_per_second: 1.888\n- epoch: 19.59\n- step: 480", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nStep\tTraining Loss\tValidation Loss\tWer\n40\t9.648200\t4.719201\t1.000000\n80\t3.953400\t3.477898\t1.000000\n120\t3.289700\t3.099611\t1.000000\n160\t3.038400\t2.993551\t1.000000\n200\t2.994500\t2.979574\t1.000000\n240\t2.959000\t2.941970\t1.000000\n280\t2.802100\t2.520133\t1.000000\n320\t1.862100\t1.499739\t0.746423\n360\t1.191800\t1.336315\t0.610261\n400\t0.951300\t1.317062\t0.598915\n440\t0.773900\t1.312918\t0.614702\n480\t0.624700\t1.376327\t0.557967\n\n/usr/local/lib/python3.10/dist-packages/transformers/models/wav2vec2/processing_wav2vec2.py:156: UserWarning: 'as_target_processor' is deprecated and will be removed in v5 of Transformers. You can process your labels by using the argument 'text' of the regular '__call__' method (either in the same call as your audio inputs, or in a separate call.\n URL(\n/usr/local/lib/python3.10/dist-packages/torch/utils/URL: UserWarning: URL.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.\n URL(\n/usr/local/lib/python3.10/dist-packages/transformers/models/wav2vec2/processing_wav2vec2.py:156: UserWarning: 'as_target_processor' is deprecated and will be removed in v5 of Transformers. You can process your labels by using the argument 'text' of the regular '__call__' method (either in the same call as your audio inputs, or in a separate call.\n URL(\n/usr/local/lib/python3.10/dist-packages/torch/utils/URL: UserWarning: URL.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.\n URL(", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
object-detection
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1143 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.0882 | 1.0 | 1250 | 1.3492 | | 1.6047 | 2.0 | 2500 | 1.2964 | | 1.5492 | 3.0 | 3750 | 1.2105 | | 1.3223 | 4.0 | 5000 | 1.1513 | | 1.1328 | 5.0 | 6250 | 1.1143 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
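The card does not show how to run the fine-tuned detector; below is a minimal sketch of standard DETR inference with `transformers`, assuming the checkpoint keeps the stock DETR processor and label mapping. The image path and score threshold are placeholders.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

model_id = "oskarkuuse/detr"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForObjectDetection.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into (score, label, box) tuples in pixel coordinates
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```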
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/detr-resnet-50", "model-index": [{"name": "detr", "results": []}]}
oskarkuuse/detr
null
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-15T21:21:35+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us
detr ==== This model is a fine-tuned version of facebook/detr-resnet-50 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.1143 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text2text-generation
transformers
# Model Card for mEdIT-xl The `medit-xl` model was obtained by fine-tuning the `MBZUAI/bactrian-x-llama-7b-lora` model on the mEdIT dataset. **Paper:** mEdIT: Multilingual Text Editing via Instruction Tuning **Authors:** Vipul Raheja, Dimitris Alikaniotis, Vivek Kulkarni, Bashar Alhafni, Dhruv Kumar ## Model Details ### Model Description - **Language(s) (NLP)**: Arabic, Chinese, English, German, Japanese, Korean, Spanish - **Finetuned from model:** `MBZUAI/bactrian-x-llama-7b-lora` ### Model Sources - **Repository:** https://github.com/vipulraheja/medit - **Paper:** https://arxiv.org/abs/2402.16472v1 ## How to use Given an edit instruction and an original text, our model can generate the edited version of the text.<br> ![task_specs](https://cdn-uploads.huggingface.co/production/uploads/60985a0547dc3dbf8a976607/816ZY2t0XPCpMMd6Z072K.png) Specifically, our models support both multi-lingual and cross-lingual text revision. Note that the input and output texts are always in the same language. The monolingual vs. cross-lingual setting is determined by comparing the language of the edit instruction in relation to the language of the input text. ### Instruction format Adherence to the following instruction format is essential; failure to do so may result in the model producing less-than-ideal results. ``` instruction_tokens = [ "Instruction", "Anweisung", ... ] input_tokens = [ "Input", "Aporte", ... ] output_tokens = [ "Output", "Produzione", ... ] task_descriptions = [ "Fix grammatical errors in this sentence", # <-- GEC task "Umschreiben Sie den Satz", # <-- Paraphrasing ... ] ``` **The entire list of possible instructions, input/output tokens, and task descriptions can be found in the Appendix of our paper.** ``` prompt_template = """### <instruction_token>:\n<task_description>\n### <input_token>:\n<input>\n### <output_token>:\n\n""" ``` Note that the tokens and the task description need not be in the language of the input (in the case of cross-lingual revision). ### Run the model ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "grammarly/medit-xl" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # English GEC using Japanese instructions prompt = '### 命令:\n文章を文法的にする\n### 入力:\nI has small cat ,\n### 出力:\n\n' inputs = tokenizer(prompt, return_tensors='pt') outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) # --> I have a small cat , # German GEC using Japanese instructions prompt = '### 命令:\n文章を文法的にする\n### 入力:\nIch haben eines kleines Katze ,\n### 出力:\n\n' # ... # --> Ich habe eine kleine Katze , ``` #### Software https://github.com/vipulraheja/medit ## Citation **BibTeX:** ``` @article{raheja2023medit, title={mEdIT: Multilingual Text Editing via Instruction Tuning}, author={Vipul Raheja and Dimitris Alikaniotis and Vivek Kulkarni and Bashar Alhafni and Dhruv Kumar}, year={2024}, eprint={2402.16472v1}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` **APA:** Raheja, V., Alikaniotis, D., Kulkarni, V., Alhafni, B., & Kumar, D. (2024). MEdIT: Multilingual Text Editing via Instruction Tuning. ArXiv. /abs/2402.16472
{"language": ["en", "de", "es", "ar", "ja", "ko", "zh"], "license": "cc-by-nc-sa-4.0", "library_name": "transformers", "datasets": ["wi_locness", "matejklemen/falko_merlin", "paws", "paws-x", "asset"], "metrics": ["bleu", "rouge", "sari", "accuracy"], "widget": [{"text": "Umschreiben sie den satz: When I grow up, I start to understand what he said is quite right.", "example_title": "GEC (de|en)"}, {"text": "\ubb38\uc7a5\uc758 \uac04\ub2e8\ud55c \ubc84\uc804 \uc791\uc131: Cuando se pueden mantener tasas de flujo comparables, los resultados son altos.", "example_title": "Simplification (ko|es)"}, {"text": "Paraphrase this: \u3044\u3061\u3054\u306f\u7269\u8a9e\u3092\u7d39\u4ecb\u3057\u3001\u8aad\u8005\u3092\u30a4\u30d9\u30f3\u30c8\u306b\u5c0e\u304f\u3068\u5f7c\u306f\u8a00\u3063\u305f\u3002", "example_title": "Paraphrase (en|ja)"}], "pipeline_tag": "text2text-generation"}
grammarly/medit-xl
null
[ "transformers", "text2text-generation", "en", "de", "es", "ar", "ja", "ko", "zh", "dataset:wi_locness", "dataset:matejklemen/falko_merlin", "dataset:paws", "dataset:paws-x", "dataset:asset", "arxiv:2402.16472", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-15T21:26:24+00:00
[ "2402.16472" ]
[ "en", "de", "es", "ar", "ja", "ko", "zh" ]
TAGS #transformers #text2text-generation #en #de #es #ar #ja #ko #zh #dataset-wi_locness #dataset-matejklemen/falko_merlin #dataset-paws #dataset-paws-x #dataset-asset #arxiv-2402.16472 #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us
# Model Card for mEdIT-xl The 'medit-xl' model was obtained by fine-tuning the 'MBZUAI/bactrian-x-llama-7b-lora' model on the mEdIT dataset. Paper: mEdIT: Multilingual Text Editing via Instruction Tuning Authors: Vipul Raheja, Dimitris Alikaniotis, Vivek Kulkarni, Bashar Alhafni, Dhruv Kumar ## Model Details ### Model Description - Language(s) (NLP): Arabic, Chinese, English, German, Japanese, Korean, Spanish - Finetuned from model: 'MBZUAI/bactrian-x-llama-7b-lora' ### Model Sources - Repository: URL - Paper: URL ## How to use Given an edit instruction and an original text, our model can generate the edited version of the text.<br> !task_specs Specifically, our models support both multi-lingual and cross-lingual text revision. Note that the input and output texts are always in the same language. The monolingual vs. cross-lingual setting is determined by comparing the language of the edit instruction in relation to the language of the input text. ### Instruction format Adherence to the following instruction format is essential; failure to do so may result in the model producing less-than-ideal results. The entire list of possible instructions, input/output tokens, and task descriptions can be found in the Appendix of our paper. Note that the tokens and the task description need not be in the language of the input (in the case of cross-lingual revision). ### Run the model #### Software URL BibTeX: APA: Raheja, V., Alikaniotis, D., Kulkarni, V., Alhafni, B., & Kumar, D. (2024). MEdIT: Multilingual Text Editing via Instruction Tuning. ArXiv. /abs/2402.16472
[ "# Model Card for mEdIT-xl\n\nThe 'medit-xl' model was obtained by fine-tuning the 'MBZUAI/bactrian-x-llama-7b-lora' model on the mEdIT dataset.\n\nPaper: mEdIT: Multilingual Text Editing via Instruction Tuning\n\nAuthors: Vipul Raheja, Dimitris Alikaniotis, Vivek Kulkarni, Bashar Alhafni, Dhruv Kumar", "## Model Details", "### Model Description\n\n- Language(s) (NLP): Arabic, Chinese, English, German, Japanese, Korean, Spanish\n- Finetuned from model: 'MBZUAI/bactrian-x-llama-7b-lora'", "### Model Sources\n\n- Repository: URL\n- Paper: URL", "## How to use\n\nGiven an edit instruction and an original text, our model can generate the edited version of the text.<br>\n\n!task_specs\n\nSpecifically, our models support both multi-lingual and cross-lingual text revision. Note that the input and output texts are always in the same language. The monolingual\nvs. cross-lingual setting is determined by comparing the language of the edit instruction in relation to the language of the input text.", "### Instruction format\n\nAdherence to the following instruction format is essential; failure to do so may result in the model producing less-than-ideal results.\n\n\n\nThe entire list of possible instructions, input/output tokens, and task descriptions can be found in the Appendix of our paper.\n\n\n\nNote that the tokens and the task description need not be in the language of the input (in the case of cross-lingual revision).", "### Run the model", "#### Software\nURL\n\nBibTeX:\n\n\nAPA:\nRaheja, V., Alikaniotis, D., Kulkarni, V., Alhafni, B., & Kumar, D. (2024). MEdIT: Multilingual Text Editing via Instruction Tuning. ArXiv. /abs/2402.16472" ]
[ "TAGS\n#transformers #text2text-generation #en #de #es #ar #ja #ko #zh #dataset-wi_locness #dataset-matejklemen/falko_merlin #dataset-paws #dataset-paws-x #dataset-asset #arxiv-2402.16472 #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us \n", "# Model Card for mEdIT-xl\n\nThe 'medit-xl' model was obtained by fine-tuning the 'MBZUAI/bactrian-x-llama-7b-lora' model on the mEdIT dataset.\n\nPaper: mEdIT: Multilingual Text Editing via Instruction Tuning\n\nAuthors: Vipul Raheja, Dimitris Alikaniotis, Vivek Kulkarni, Bashar Alhafni, Dhruv Kumar", "## Model Details", "### Model Description\n\n- Language(s) (NLP): Arabic, Chinese, English, German, Japanese, Korean, Spanish\n- Finetuned from model: 'MBZUAI/bactrian-x-llama-7b-lora'", "### Model Sources\n\n- Repository: URL\n- Paper: URL", "## How to use\n\nGiven an edit instruction and an original text, our model can generate the edited version of the text.<br>\n\n!task_specs\n\nSpecifically, our models support both multi-lingual and cross-lingual text revision. Note that the input and output texts are always in the same language. The monolingual\nvs. cross-lingual setting is determined by comparing the language of the edit instruction in relation to the language of the input text.", "### Instruction format\n\nAdherence to the following instruction format is essential; failure to do so may result in the model producing less-than-ideal results.\n\n\n\nThe entire list of possible instructions, input/output tokens, and task descriptions can be found in the Appendix of our paper.\n\n\n\nNote that the tokens and the task description need not be in the language of the input (in the case of cross-lingual revision).", "### Run the model", "#### Software\nURL\n\nBibTeX:\n\n\nAPA:\nRaheja, V., Alikaniotis, D., Kulkarni, V., Alhafni, B., & Kumar, D. (2024). MEdIT: Multilingual Text Editing via Instruction Tuning. ArXiv. /abs/2402.16472" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-subjqa-movies_2 This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
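No usage example is included in the card; a minimal sketch with the `question-answering` pipeline is shown below. The question and context strings are placeholders chosen only to illustrate the SubjQA movies domain.

```python
from transformers import pipeline

# Load the fine-tuned extractive QA checkpoint
qa = pipeline(
    "question-answering",
    model="mohamed13579/roberta-finetuned-subjqa-movies_2",
)

# Placeholder question and context
result = qa(
    question="How was the acting?",
    context="The movie dragged in places, but the acting was superb throughout.",
)
print(result["answer"], result["score"])
```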
{"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "base_model": "deepset/roberta-base-squad2", "model-index": [{"name": "roberta-finetuned-subjqa-movies_2", "results": []}]}
mohamed13579/roberta-finetuned-subjqa-movies_2
null
[ "transformers", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "base_model:deepset/roberta-base-squad2", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-15T21:26:49+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #question-answering #generated_from_trainer #base_model-deepset/roberta-base-squad2 #license-cc-by-4.0 #endpoints_compatible #region-us
# roberta-finetuned-subjqa-movies_2 This model is a fine-tuned version of deepset/roberta-base-squad2 on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# roberta-finetuned-subjqa-movies_2\n\nThis model is a fine-tuned version of deepset/roberta-base-squad2 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #question-answering #generated_from_trainer #base_model-deepset/roberta-base-squad2 #license-cc-by-4.0 #endpoints_compatible #region-us \n", "# roberta-finetuned-subjqa-movies_2\n\nThis model is a fine-tuned version of deepset/roberta-base-squad2 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_2-seqsight_4096_512_46M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset. It achieves the following results on the evaluation set: - Loss: 2.2689 - F1 Score: 0.8064 - Accuracy: 0.8079 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.2979 | 100.0 | 200 | 1.1069 | 0.7284 | 0.7287 | | 0.0468 | 200.0 | 400 | 1.8541 | 0.7406 | 0.7439 | | 0.02 | 300.0 | 600 | 1.8279 | 0.7647 | 0.7652 | | 0.0101 | 400.0 | 800 | 2.1472 | 0.7603 | 0.7622 | | 0.0079 | 500.0 | 1000 | 2.0739 | 0.7760 | 0.7774 | | 0.0062 | 600.0 | 1200 | 1.9702 | 0.7796 | 0.7805 | | 0.0049 | 700.0 | 1400 | 2.0836 | 0.7794 | 0.7805 | | 0.0043 | 800.0 | 1600 | 1.9453 | 0.7804 | 0.7805 | | 0.0035 | 900.0 | 1800 | 2.1713 | 0.7799 | 0.7805 | | 0.0026 | 1000.0 | 2000 | 2.2365 | 0.7644 | 0.7652 | | 0.0028 | 1100.0 | 2200 | 2.0992 | 0.7679 | 0.7683 | | 0.0024 | 1200.0 | 2400 | 2.2926 | 0.7677 | 0.7683 | | 0.0023 | 1300.0 | 2600 | 1.9360 | 0.7835 | 0.7835 | | 0.0019 | 1400.0 | 2800 | 2.2408 | 0.7738 | 0.7744 | | 0.0015 | 1500.0 | 3000 | 2.5330 | 0.7769 | 0.7774 | | 0.0018 | 1600.0 | 3200 | 2.5514 | 0.7841 | 0.7866 | | 0.0015 | 1700.0 | 3400 | 2.4962 | 0.7707 | 0.7713 | | 0.0013 | 1800.0 | 3600 | 2.8557 | 0.7673 | 0.7683 | | 0.0012 | 1900.0 | 3800 | 2.6002 | 0.7673 | 0.7683 | | 0.0014 | 2000.0 | 4000 | 2.4081 | 0.7708 | 0.7713 | | 0.001 | 2100.0 | 4200 | 2.7194 | 0.7736 | 0.7744 | | 0.0011 | 2200.0 | 4400 | 2.4240 | 0.7711 | 0.7713 | | 0.0013 | 2300.0 | 4600 | 2.6158 | 0.7794 | 0.7805 | | 0.0009 | 2400.0 | 4800 | 2.9172 | 0.7740 | 0.7744 | | 0.0011 | 2500.0 | 5000 | 2.3413 | 0.7708 | 0.7713 | | 0.0009 | 2600.0 | 5200 | 2.8404 | 0.7769 | 0.7774 | | 0.0009 | 2700.0 | 5400 | 2.7277 | 0.7741 | 0.7744 | | 0.0008 | 2800.0 | 5600 | 2.6501 | 0.7831 | 0.7835 | | 0.0009 | 2900.0 | 5800 | 2.5986 | 0.7707 | 0.7713 | | 0.0007 | 3000.0 | 6000 | 2.8495 | 0.7802 | 0.7805 | | 0.0009 | 3100.0 | 6200 | 2.7719 | 0.7708 | 0.7713 | | 0.0005 | 3200.0 | 6400 | 2.8714 | 0.7771 | 0.7774 | | 0.0007 | 3300.0 | 6600 | 2.9542 | 0.7792 | 0.7805 | | 0.0006 | 3400.0 | 6800 | 2.8249 | 0.7762 | 0.7774 | | 0.0006 | 3500.0 | 7000 | 2.8867 | 0.7650 | 0.7652 | | 0.0005 | 3600.0 | 7200 | 2.8028 | 0.7738 | 0.7744 | | 0.0004 | 3700.0 | 7400 | 3.1408 | 0.7731 | 0.7744 | | 0.0002 | 3800.0 | 7600 | 3.1060 | 0.7738 | 0.7744 | | 0.0004 | 3900.0 | 7800 | 2.8467 | 0.7739 | 0.7744 | | 0.0004 | 4000.0 | 8000 | 3.0341 | 0.7762 | 0.7774 | | 0.0003 | 4100.0 | 8200 | 3.1643 | 0.7738 | 0.7744 | | 0.0004 | 4200.0 | 8400 | 2.7017 | 0.7742 | 0.7744 | | 0.0004 | 4300.0 | 8600 | 2.9451 | 0.7766 | 0.7774 | | 0.0003 | 4400.0 | 8800 
| 3.0862 | 0.7796 | 0.7805 | | 0.0002 | 4500.0 | 9000 | 3.0645 | 0.7736 | 0.7744 | | 0.0003 | 4600.0 | 9200 | 2.9410 | 0.7766 | 0.7774 | | 0.0002 | 4700.0 | 9400 | 2.9397 | 0.7738 | 0.7744 | | 0.0002 | 4800.0 | 9600 | 2.9719 | 0.7768 | 0.7774 | | 0.0002 | 4900.0 | 9800 | 2.9873 | 0.7738 | 0.7744 | | 0.0001 | 5000.0 | 10000 | 3.0434 | 0.7738 | 0.7744 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
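As a reading aid, the hyperparameters listed above map roughly onto the following `transformers.TrainingArguments`; this is a reconstruction for illustration, not the exact training script, and the output directory and evaluation cadence are assumptions inferred from the card and its results table.

```python
from transformers import TrainingArguments

# Values copied from the hyperparameter list above; everything else stays at defaults
training_args = TrainingArguments(
    output_dir="GUE_mouse_2-seqsight_4096_512_46M-L32_all",  # assumed output directory
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",  # assumption: evaluate every 200 steps, as in the results table
    eval_steps=200,
)
```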
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_mouse_2-seqsight_4096_512_46M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_4096_512_46M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-15T21:28:21+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_mouse\_2-seqsight\_4096\_512\_46M-L32\_all =============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset. It achieves the following results on the evaluation set: * Loss: 2.2689 * F1 Score: 0.8064 * Accuracy: 0.8079 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
{"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
shaswatamitra/llama2-7b-chat-hf-finetuned3
null
[ "transformers", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-15T21:36:52+00:00
[]
[]
TAGS #transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit AutoTrain. # Usage
[ "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
[ "TAGS\n#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us \n", "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
automatic-speech-recognition
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
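The getting-started section of this template is empty; the snippet below is a minimal, untested sketch of transcription with this Whisper checkpoint. Forcing Swahili decoding through `generate_kwargs` is an assumption about how the model is meant to be used, and the audio path is a placeholder.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="dmusingu/WHISPER-LARGE-SWAHILI-ASR-CV-14",
)

# "sample.wav" is a placeholder; the language/task forcing below is an assumption
result = asr(
    "sample.wav",
    generate_kwargs={"language": "swahili", "task": "transcribe"},
)
print(result["text"])
```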
{"library_name": "transformers", "tags": []}
dmusingu/WHISPER-LARGE-SWAHILI-ASR-CV-14
null
[ "transformers", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T21:37:09+00:00
[ "1910.09700" ]
[]
TAGS #transformers #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mist-7b-sft-29k-comments This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 4 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mist-7b-sft-29k-comments", "results": []}]}
sanps/mist-7b-sft-29k-comments
null
[ "transformers", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T21:37:56+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #generated_from_trainer #conversational #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# mist-7b-sft-29k-comments This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 4 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "# mist-7b-sft-29k-comments\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 4\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.35.0\n- Pytorch 2.1.0+cu118\n- Datasets 2.14.6\n- Tokenizers 0.14.1" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #generated_from_trainer #conversational #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# mist-7b-sft-29k-comments\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 4\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.35.0\n- Pytorch 2.1.0+cu118\n- Datasets 2.14.6\n- Tokenizers 0.14.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_splice_reconstructed-seqsight_4096_512_46M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset. It achieves the following results on the evaluation set: - Loss: 0.7231 - F1 Score: 0.7034 - Accuracy: 0.7063 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 1536 - eval_batch_size: 1536 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.9287 | 8.33 | 200 | 0.7805 | 0.6255 | 0.6427 | | 0.7366 | 16.67 | 400 | 0.7225 | 0.6693 | 0.6712 | | 0.6672 | 25.0 | 600 | 0.6967 | 0.6906 | 0.6905 | | 0.6279 | 33.33 | 800 | 0.6820 | 0.6943 | 0.6997 | | 0.5958 | 41.67 | 1000 | 0.6874 | 0.7021 | 0.7041 | | 0.5713 | 50.0 | 1200 | 0.6880 | 0.7019 | 0.7028 | | 0.5471 | 58.33 | 1400 | 0.6999 | 0.7030 | 0.7045 | | 0.5243 | 66.67 | 1600 | 0.7067 | 0.7059 | 0.7076 | | 0.5023 | 75.0 | 1800 | 0.7213 | 0.7105 | 0.7122 | | 0.4796 | 83.33 | 2000 | 0.7262 | 0.7012 | 0.6999 | | 0.4578 | 91.67 | 2200 | 0.7641 | 0.7071 | 0.7082 | | 0.4388 | 100.0 | 2400 | 0.7542 | 0.7028 | 0.7030 | | 0.4149 | 108.33 | 2600 | 0.7753 | 0.7027 | 0.7074 | | 0.3968 | 116.67 | 2800 | 0.7969 | 0.6912 | 0.6903 | | 0.3793 | 125.0 | 3000 | 0.8172 | 0.6976 | 0.6988 | | 0.3595 | 133.33 | 3200 | 0.8242 | 0.6965 | 0.6947 | | 0.3416 | 141.67 | 3400 | 0.8378 | 0.7068 | 0.7080 | | 0.3311 | 150.0 | 3600 | 0.8577 | 0.7047 | 0.7078 | | 0.3136 | 158.33 | 3800 | 0.8809 | 0.7002 | 0.6997 | | 0.3002 | 166.67 | 4000 | 0.8818 | 0.7040 | 0.7054 | | 0.2881 | 175.0 | 4200 | 0.9021 | 0.7045 | 0.7065 | | 0.2782 | 183.33 | 4400 | 0.9051 | 0.7036 | 0.7043 | | 0.2649 | 191.67 | 4600 | 0.9085 | 0.7037 | 0.7065 | | 0.2583 | 200.0 | 4800 | 0.9314 | 0.7014 | 0.7036 | | 0.2489 | 208.33 | 5000 | 0.9270 | 0.7005 | 0.7004 | | 0.2389 | 216.67 | 5200 | 0.9608 | 0.7000 | 0.7021 | | 0.2307 | 225.0 | 5400 | 0.9725 | 0.7051 | 0.7069 | | 0.2225 | 233.33 | 5600 | 0.9891 | 0.6981 | 0.6993 | | 0.2165 | 241.67 | 5800 | 0.9782 | 0.7003 | 0.7017 | | 0.212 | 250.0 | 6000 | 1.0163 | 0.7045 | 0.7069 | | 0.2042 | 258.33 | 6200 | 0.9938 | 0.6964 | 0.6964 | | 0.198 | 266.67 | 6400 | 1.0095 | 0.7004 | 0.7008 | | 0.1955 | 275.0 | 6600 | 1.0171 | 0.7035 | 0.7050 | | 0.1895 | 283.33 | 6800 | 1.0301 | 0.7039 | 0.7056 | | 0.1846 | 291.67 | 7000 | 1.0309 | 0.6962 | 0.6975 | | 0.1802 | 300.0 | 7200 | 1.0419 | 0.7016 | 0.7028 | | 0.1793 | 308.33 | 7400 | 1.0345 | 0.7001 | 0.7012 | | 0.1738 | 316.67 | 7600 | 1.0522 | 0.6994 | 0.7006 | | 0.1713 | 325.0 | 7800 | 1.0432 | 0.7005 | 0.7014 | | 0.1689 | 333.33 | 8000 | 1.0446 | 0.6982 | 0.6984 | | 0.1659 | 341.67 | 8200 | 1.0530 | 0.6997 | 0.7006 | | 0.1624 | 350.0 | 8400 | 1.0648 | 0.7004 | 0.7023 | | 0.16 | 358.33 | 8600 | 1.0741 | 0.7004 | 0.7014 | | 
0.1583 | 366.67 | 8800 | 1.0657 | 0.6998 | 0.7001 | | 0.1559 | 375.0 | 9000 | 1.0638 | 0.7038 | 0.7047 | | 0.1553 | 383.33 | 9200 | 1.0733 | 0.7006 | 0.7025 | | 0.1529 | 391.67 | 9400 | 1.0754 | 0.7010 | 0.7019 | | 0.1514 | 400.0 | 9600 | 1.0805 | 0.6995 | 0.7004 | | 0.1505 | 408.33 | 9800 | 1.0792 | 0.6991 | 0.7001 | | 0.149 | 416.67 | 10000 | 1.0819 | 0.6988 | 0.6999 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_4096_512_46M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_4096_512_46M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-15T21:38:12+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_splice\_reconstructed-seqsight\_4096\_512\_46M-L32\_all ============================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset. It achieves the following results on the evaluation set: * Loss: 0.7231 * F1 Score: 0.7034 * Accuracy: 0.7063 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 1536 * eval\_batch\_size: 1536 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_0-seqsight_4096_512_46M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.5182 - F1 Score: 0.7501 - Accuracy: 0.751 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6027 | 12.5 | 200 | 0.5984 | 0.7034 | 0.704 | | 0.5178 | 25.0 | 400 | 0.5927 | 0.7160 | 0.716 | | 0.4842 | 37.5 | 600 | 0.5973 | 0.7151 | 0.715 | | 0.4529 | 50.0 | 800 | 0.6029 | 0.7039 | 0.704 | | 0.423 | 62.5 | 1000 | 0.6489 | 0.7094 | 0.71 | | 0.3947 | 75.0 | 1200 | 0.6742 | 0.7089 | 0.709 | | 0.3668 | 87.5 | 1400 | 0.6916 | 0.7085 | 0.709 | | 0.3395 | 100.0 | 1600 | 0.6863 | 0.6906 | 0.691 | | 0.317 | 112.5 | 1800 | 0.7582 | 0.6990 | 0.699 | | 0.2938 | 125.0 | 2000 | 0.7515 | 0.6944 | 0.696 | | 0.2737 | 137.5 | 2200 | 0.8240 | 0.6861 | 0.686 | | 0.256 | 150.0 | 2400 | 0.8041 | 0.6880 | 0.688 | | 0.2379 | 162.5 | 2600 | 0.8410 | 0.6959 | 0.696 | | 0.2226 | 175.0 | 2800 | 0.8380 | 0.6768 | 0.677 | | 0.2114 | 187.5 | 3000 | 0.8642 | 0.6849 | 0.685 | | 0.1989 | 200.0 | 3200 | 0.8808 | 0.6958 | 0.696 | | 0.188 | 212.5 | 3400 | 0.8635 | 0.6901 | 0.69 | | 0.1771 | 225.0 | 3600 | 0.9319 | 0.6921 | 0.692 | | 0.1683 | 237.5 | 3800 | 0.9423 | 0.6960 | 0.696 | | 0.1593 | 250.0 | 4000 | 0.9104 | 0.6990 | 0.699 | | 0.151 | 262.5 | 4200 | 0.9543 | 0.6981 | 0.698 | | 0.1457 | 275.0 | 4400 | 0.9615 | 0.6921 | 0.692 | | 0.139 | 287.5 | 4600 | 0.9599 | 0.6891 | 0.689 | | 0.132 | 300.0 | 4800 | 1.0306 | 0.6989 | 0.699 | | 0.1276 | 312.5 | 5000 | 1.0203 | 0.6910 | 0.691 | | 0.1225 | 325.0 | 5200 | 1.0006 | 0.6961 | 0.696 | | 0.1167 | 337.5 | 5400 | 1.0352 | 0.6909 | 0.691 | | 0.1126 | 350.0 | 5600 | 1.0325 | 0.7071 | 0.707 | | 0.1092 | 362.5 | 5800 | 1.0328 | 0.6941 | 0.694 | | 0.1057 | 375.0 | 6000 | 1.0376 | 0.6921 | 0.692 | | 0.1022 | 387.5 | 6200 | 1.0457 | 0.6881 | 0.688 | | 0.0985 | 400.0 | 6400 | 1.0657 | 0.7021 | 0.702 | | 0.0948 | 412.5 | 6600 | 1.0836 | 0.6931 | 0.693 | | 0.0937 | 425.0 | 6800 | 1.0806 | 0.7001 | 0.7 | | 0.0899 | 437.5 | 7000 | 1.1024 | 0.6861 | 0.686 | | 0.088 | 450.0 | 7200 | 1.1138 | 0.6916 | 0.692 | | 0.0865 | 462.5 | 7400 | 1.1193 | 0.6911 | 0.691 | | 0.0839 | 475.0 | 7600 | 1.1127 | 0.6909 | 0.691 | | 0.0822 | 487.5 | 7800 | 1.1370 | 0.6909 | 0.691 | | 0.0812 | 500.0 | 8000 | 1.1088 | 0.6950 | 0.695 | | 0.0796 | 512.5 | 8200 | 1.1048 | 0.6911 | 0.691 | | 0.0772 | 525.0 | 8400 | 1.1572 | 0.6931 | 0.693 | | 0.0763 | 537.5 | 8600 | 1.1530 | 0.6951 | 0.695 | | 0.0749 | 550.0 | 8800 | 1.1517 | 0.6980 | 0.698 | | 0.0739 | 562.5 | 9000 | 1.1435 | 0.7061 | 0.706 | | 0.0729 | 575.0 | 9200 
| 1.1616 | 0.6950 | 0.695 | | 0.0714 | 587.5 | 9400 | 1.1535 | 0.6981 | 0.698 | | 0.0712 | 600.0 | 9600 | 1.1519 | 0.7001 | 0.7 | | 0.0708 | 612.5 | 9800 | 1.1583 | 0.6981 | 0.698 | | 0.07 | 625.0 | 10000 | 1.1547 | 0.6991 | 0.699 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_0-seqsight_4096_512_46M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_tf_0-seqsight_4096_512_46M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-15T21:42:03+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_tf\_0-seqsight\_4096\_512\_46M-L32\_all ============================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.5182 * F1 Score: 0.7501 * Accuracy: 0.751 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: eulpicard/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
eulpicard/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-04-15T21:42:17+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
# ppo Agent playing Huggy This is a trained model of a ppo agent playing Huggy using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how ML-Agents works: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser 1. If the environment is part of ML-Agents official environments, go to URL 2. Step 1: Find your model_id: eulpicard/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how ML-Agents works:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: eulpicard/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n", "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how ML-Agents works:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: eulpicard/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
text-generation
transformers
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
{"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
shaswatamitra/mistral-7b-v2-finetuned3
null
[ "transformers", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-15T21:43:52+00:00
[]
[]
TAGS #transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit AutoTrain. # Usage
[ "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
[ "TAGS\n#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us \n", "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
yongzx/my-awesome-model
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T21:44:01+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
gubartz/facetsum-m-2048
null
[ "transformers", "safetensors", "longt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T21:46:07+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #longt5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #longt5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ruBert-base-sberquad-0.001-len_2-filtered-v2 This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 7000 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.001-len_2-filtered-v2", "results": []}]}
Shalazary/ruBert-base-sberquad-0.001-len_2-filtered-v2
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:ai-forever/ruBert-base", "license:apache-2.0", "region:us" ]
null
2024-04-15T21:47:25+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
# ruBert-base-sberquad-0.001-len_2-filtered-v2 This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 7000 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# ruBert-base-sberquad-0.001-len_2-filtered-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n", "# ruBert-base-sberquad-0.001-len_2-filtered-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
yongzx/Llama-2-7b-hf__llama2-zh-qlora__checkpoint-18500
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T21:49:52+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
guoyu-zhang/hh
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T21:50:24+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_1-seqsight_4096_512_46M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.6189 - F1 Score: 0.7655 - Accuracy: 0.766 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6092 | 13.33 | 200 | 0.5896 | 0.6856 | 0.687 | | 0.5251 | 26.67 | 400 | 0.6077 | 0.6943 | 0.695 | | 0.4911 | 40.0 | 600 | 0.6385 | 0.6854 | 0.686 | | 0.456 | 53.33 | 800 | 0.6225 | 0.6892 | 0.693 | | 0.4285 | 66.67 | 1000 | 0.6566 | 0.6959 | 0.696 | | 0.4018 | 80.0 | 1200 | 0.6735 | 0.6960 | 0.696 | | 0.3754 | 93.33 | 1400 | 0.6929 | 0.6901 | 0.691 | | 0.3513 | 106.67 | 1600 | 0.7154 | 0.6946 | 0.695 | | 0.327 | 120.0 | 1800 | 0.7073 | 0.6940 | 0.694 | | 0.3043 | 133.33 | 2000 | 0.7751 | 0.6925 | 0.694 | | 0.2846 | 146.67 | 2200 | 0.7938 | 0.6918 | 0.692 | | 0.2645 | 160.0 | 2400 | 0.8680 | 0.6989 | 0.699 | | 0.2484 | 173.33 | 2600 | 0.8607 | 0.6736 | 0.675 | | 0.2324 | 186.67 | 2800 | 0.8500 | 0.6830 | 0.683 | | 0.2181 | 200.0 | 3000 | 0.8959 | 0.6959 | 0.696 | | 0.2052 | 213.33 | 3200 | 0.9204 | 0.6807 | 0.682 | | 0.194 | 226.67 | 3400 | 0.9278 | 0.6779 | 0.678 | | 0.183 | 240.0 | 3600 | 0.9534 | 0.6869 | 0.687 | | 0.1741 | 253.33 | 3800 | 0.9607 | 0.6769 | 0.677 | | 0.1647 | 266.67 | 4000 | 1.0094 | 0.6780 | 0.678 | | 0.1557 | 280.0 | 4200 | 1.0197 | 0.6737 | 0.674 | | 0.1487 | 293.33 | 4400 | 1.0663 | 0.6753 | 0.676 | | 0.1413 | 306.67 | 4600 | 1.0563 | 0.6840 | 0.684 | | 0.1364 | 320.0 | 4800 | 1.0588 | 0.6766 | 0.677 | | 0.1289 | 333.33 | 5000 | 1.0702 | 0.6837 | 0.684 | | 0.1246 | 346.67 | 5200 | 1.0804 | 0.6850 | 0.685 | | 0.1186 | 360.0 | 5400 | 1.1209 | 0.6750 | 0.676 | | 0.1146 | 373.33 | 5600 | 1.1145 | 0.6787 | 0.679 | | 0.1109 | 386.67 | 5800 | 1.1222 | 0.6768 | 0.678 | | 0.1071 | 400.0 | 6000 | 1.0662 | 0.6809 | 0.681 | | 0.1019 | 413.33 | 6200 | 1.1542 | 0.6814 | 0.682 | | 0.0988 | 426.67 | 6400 | 1.1403 | 0.6762 | 0.677 | | 0.0961 | 440.0 | 6600 | 1.1969 | 0.6785 | 0.679 | | 0.093 | 453.33 | 6800 | 1.1861 | 0.6736 | 0.674 | | 0.091 | 466.67 | 7000 | 1.1575 | 0.6868 | 0.687 | | 0.0892 | 480.0 | 7200 | 1.1656 | 0.6793 | 0.68 | | 0.0863 | 493.33 | 7400 | 1.1857 | 0.6821 | 0.683 | | 0.0842 | 506.67 | 7600 | 1.1549 | 0.6877 | 0.688 | | 0.0825 | 520.0 | 7800 | 1.2023 | 0.6760 | 0.677 | | 0.0795 | 533.33 | 8000 | 1.1960 | 0.6834 | 0.684 | | 0.0791 | 546.67 | 8200 | 1.2143 | 0.6813 | 0.682 | | 0.077 | 560.0 | 8400 | 1.2029 | 0.6834 | 0.684 | | 0.0755 | 573.33 | 8600 | 1.2134 | 0.6795 | 0.68 | | 0.0745 | 586.67 | 8800 | 1.2086 | 0.6818 | 0.682 | | 0.0733 | 600.0 | 9000 | 1.2249 | 0.6787 
| 0.679 | | 0.072 | 613.33 | 9200 | 1.2278 | 0.6816 | 0.682 | | 0.0723 | 626.67 | 9400 | 1.2215 | 0.6814 | 0.682 | | 0.0711 | 640.0 | 9600 | 1.2281 | 0.6846 | 0.685 | | 0.0702 | 653.33 | 9800 | 1.2360 | 0.6774 | 0.678 | | 0.0694 | 666.67 | 10000 | 1.2397 | 0.6815 | 0.682 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
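As a rough illustration of how an adapter like this is typically loaded, here is a minimal, hedged sketch using 🤗 Transformers and PEFT. It assumes the base model exposes a standard `AutoModelForSequenceClassification` interface (possibly via `trust_remote_code=True`), that the task is binary classification (inferred only from the F1/accuracy metrics), and that the example DNA sequence and `num_labels` value are placeholders — none of these details are confirmed by the card above.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_1-seqsight_4096_512_46M-L32_all"

# Load the base model and tokenizer; trust_remote_code may be required for a custom architecture
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # num_labels=2 is an assumption
)

# Attach the fine-tuned adapter weights from this repository
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # placeholder sequence
pred = model(**inputs).logits.argmax(dim=-1)
print(pred)
```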
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_1-seqsight_4096_512_46M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_tf_1-seqsight_4096_512_46M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-15T21:50:30+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_tf\_1-seqsight\_4096\_512\_46M-L32\_all ============================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.6189 * F1 Score: 0.7655 * Accuracy: 0.766 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
{"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
shaswatamitra/aimaven-prometheus-finetuned3
null
[ "transformers", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-15T21:50:36+00:00
[]
[]
TAGS #transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit AutoTrain. # Usage
[ "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
[ "TAGS\n#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us \n", "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ruBert-base-sberquad-0.001-len_4-filtered This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 5000 ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
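For readers who want a starting point, the snippet below is a hedged sketch of loading this adapter on top of the base model with PEFT. The extractive question-answering setup is only inferred from the "sberquad" part of the model name; the actual task head, adapter type, and whether the head weights are stored with the adapter are not documented above, so treat this as an assumption rather than the training configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from peft import PeftModel

base_id = "ai-forever/ruBert-base"
adapter_id = "Shalazary/ruBert-base-sberquad-0.001-len_4-filtered"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForQuestionAnswering.from_pretrained(base_id)  # QA head assumed, not confirmed
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

question = "Где находится Эйфелева башня?"
context = "Эйфелева башня находится в Париже."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely answer span from the start/end logits
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```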
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.001-len_4-filtered", "results": []}]}
Shalazary/ruBert-base-sberquad-0.001-len_4-filtered
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:ai-forever/ruBert-base", "license:apache-2.0", "region:us" ]
null
2024-04-15T21:51:13+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
# ruBert-base-sberquad-0.001-len_4-filtered This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 5000 ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# ruBert-base-sberquad-0.001-len_4-filtered\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n", "# ruBert-base-sberquad-0.001-len_4-filtered\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text2text-generation
transformers
# Please use du-kang/custom5-2e!
{"license": "mit"}
du-kang/custom5
null
[ "transformers", "pytorch", "t5", "text2text-generation", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T21:51:58+00:00
[]
[]
TAGS #transformers #pytorch #t5 #text2text-generation #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Please use du-kang/custom5-2e!
[ "# Please use du-kang/custom5-2e!" ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Please use du-kang/custom5-2e!" ]
text-classification
setfit
# SetFit with BAAI/bge-small-en-v1.5 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:---------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | negative | <ul><li>'there might be some sort of credible gender-provoking philosophy submerged here , but who the hell cares ?'</li><li>'represents the depths to which the girls-behaving-badly film has fallen .'</li><li>'-lrb- a -rrb- crushing disappointment .'</li></ul> | | positive | <ul><li>'what saves it ... and makes it one of the better video-game-based flicks , is that the film acknowledges upfront that the plot makes no sense , such that the lack of linearity is the point of emotional and moral departure for protagonist alice .'</li><li>'but it could be , by its art and heart , a necessary one .'</li><li>'a culture-clash comedy that , in addition to being very funny , captures some of the discomfort and embarrassment of being a bumbling american in europe .'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.8506 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("Alex-Yuchen/setfit-bge-small-v1.5-sst2-8-shot") # Run inference preds = model("it 's refreshing to see a romance this smart .") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 6 | 22.5 | 45 | | Label | Training Sample Count | |:---------|:----------------------| | negative | 8 | | positive | 8 | ### Training Hyperparameters - batch_size: (32, 32) - num_epochs: (10, 10) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-----:|:----:|:-------------:|:---------------:| | 0.2 | 1 | 0.2087 | - | | 10.0 | 50 | 0.0083 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.6.1 - Transformers: 4.38.2 - PyTorch: 2.2.1+cu121 - Datasets: 2.18.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
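To make the hyperparameter table above concrete, here is a hedged sketch of an equivalent 8-shot training run with the SetFit 1.0 `Trainer`. The `SetFit/sst2` dataset id, its column names, and the sampling call are assumptions about how the few-shot split was built; they are not taken from this card.

```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments, sample_dataset

# Sample 8 examples per class, mirroring the 8-shot setup reported above
dataset = load_dataset("SetFit/sst2")
train_ds = sample_dataset(dataset["train"], label_column="label", num_samples=8)
eval_ds = dataset["test"]

model = SetFitModel.from_pretrained("BAAI/bge-small-en-v1.5")

# batch_size and num_epochs follow the Training Hyperparameters section
args = TrainingArguments(batch_size=32, num_epochs=10)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
print(trainer.evaluate())
```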
{"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "widget": [{"text": "amy and matthew have a bit of a phony relationship , but the film works in spite of it ."}, {"text": "it 's refreshing to see a romance this smart ."}, {"text": "bogdanich is unashamedly pro-serbian and makes little attempt to give voice to the other side ."}, {"text": "sayles has an eye for the ways people of different ethnicities talk to and about others outside the group ."}, {"text": "eddie murphy and owen wilson have a cute partnership in i spy , but the movie around them is so often nearly nothing that their charm does n't do a load of good ."}], "pipeline_tag": "text-classification", "inference": true, "base_model": "BAAI/bge-small-en-v1.5", "model-index": [{"name": "SetFit with BAAI/bge-small-en-v1.5", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.8506315211422295, "name": "Accuracy"}]}]}]}
Alex-Yuchen/setfit-bge-small-v1.5-sst2-8-shot
null
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:BAAI/bge-small-en-v1.5", "model-index", "region:us" ]
null
2024-04-15T21:53:34+00:00
[ "2209.11055" ]
[]
TAGS #setfit #safetensors #bert #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-BAAI/bge-small-en-v1.5 #model-index #region-us
SetFit with BAAI/bge-small-en-v1.5 ================================== This is a SetFit model that can be used for Text Classification. This SetFit model uses BAAI/bge-small-en-v1.5 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a Sentence Transformer with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. Model Details ------------- ### Model Description * Model Type: SetFit * Sentence Transformer body: BAAI/bge-small-en-v1.5 * Classification head: a LogisticRegression instance * Maximum Sequence Length: 512 tokens * Number of Classes: 2 classes ### Model Sources * Repository: SetFit on GitHub * Paper: Efficient Few-Shot Learning Without Prompts * Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts ### Model Labels Evaluation ---------- ### Metrics Uses ---- ### Direct Use for Inference First install the SetFit library: Then you can load this model and run inference. Training Details ---------------- ### Training Set Metrics ### Training Hyperparameters * batch\_size: (32, 32) * num\_epochs: (10, 10) * max\_steps: -1 * sampling\_strategy: oversampling * body\_learning\_rate: (2e-05, 1e-05) * head\_learning\_rate: 0.01 * loss: CosineSimilarityLoss * distance\_metric: cosine\_distance * margin: 0.25 * end\_to\_end: False * use\_amp: False * warmup\_proportion: 0.1 * seed: 42 * eval\_max\_steps: -1 * load\_best\_model\_at\_end: False ### Training Results ### Framework Versions * Python: 3.10.12 * SetFit: 1.0.3 * Sentence Transformers: 2.6.1 * Transformers: 4.38.2 * PyTorch: 2.2.1+cu121 * Datasets: 2.18.0 * Tokenizers: 0.15.2 ### BibTeX
[ "### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: BAAI/bge-small-en-v1.5\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 512 tokens\n* Number of Classes: 2 classes", "### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts", "### Model Labels\n\n\n\nEvaluation\n----------", "### Metrics\n\n\n\nUses\n----", "### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------", "### Training Set Metrics", "### Training Hyperparameters\n\n\n* batch\\_size: (32, 32)\n* num\\_epochs: (10, 10)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False", "### Training Results", "### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.6.1\n* Transformers: 4.38.2\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.18.0\n* Tokenizers: 0.15.2", "### BibTeX" ]
[ "TAGS\n#setfit #safetensors #bert #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-BAAI/bge-small-en-v1.5 #model-index #region-us \n", "### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: BAAI/bge-small-en-v1.5\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 512 tokens\n* Number of Classes: 2 classes", "### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts", "### Model Labels\n\n\n\nEvaluation\n----------", "### Metrics\n\n\n\nUses\n----", "### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------", "### Training Set Metrics", "### Training Hyperparameters\n\n\n* batch\\_size: (32, 32)\n* num\\_epochs: (10, 10)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False", "### Training Results", "### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.6.1\n* Transformers: 4.38.2\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.18.0\n* Tokenizers: 0.15.2", "### BibTeX" ]
text-generation
transformers
# Model Card for Model ID Orig name: jmodel/Llama-2-7b-hf__llama2-zh-qlora__checkpoint-18500 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
jmodel/Llama-2-7b-zh-qlora-ckpt18500
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T21:53:58+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID Orig name: jmodel/Llama-2-7b-hf__llama2-zh-qlora__checkpoint-18500 ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID\nOrig name: jmodel/Llama-2-7b-hf__llama2-zh-qlora__checkpoint-18500", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID\nOrig name: jmodel/Llama-2-7b-hf__llama2-zh-qlora__checkpoint-18500", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# WizardLM-2-8x22B - EXL2 2.5bpw This is a 2.5bpw EXL2 quant of [microsoft/WizardLM-2-8x22B](https://huggingface.co/microsoft/WizardLM-2-8x22B) Details about the model can be found at the above model page. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. | Quant Level | Perplexity Score | |-------------|------------------| | 7.0 | 4.5859 | | 6.0 | 4.6252 | | 5.5 | 4.6493 | | 5.0 | 4.6937 | | 4.5 | 4.8029 | | 4.0 | 4.9372 | | 3.5 | 5.1336 | | 3.25 | 5.3636 | | 3.0 | 5.5468 | | 2.75 | 5.8255 | | 2.5 | 6.3362 | | 2.25 | 7.7763 | ### Perplexity Script This was the script used for perplexity testing. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 DATA_SET=/root/wikitext/wikitext-2-v1.parquet # Set the model name and bit size MODEL_NAME="WizardLM-2-8x22B" BIT_PRECISIONS=(6.0 5.5 5.0 4.5 4.0 3.5 3.25 3.0 2.75 2.5 2.25) # Print the markdown table header echo "| Quant Level | Perplexity Score |" echo "|-------------|------------------|" for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do LOCAL_FOLDER="/root/models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" REMOTE_FOLDER="Dracones/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" if [ ! -d "$LOCAL_FOLDER" ]; then huggingface-cli download --local-dir-use-symlinks=False --local-dir "${LOCAL_FOLDER}" "${REMOTE_FOLDER}" >> /root/download.log 2>&1 fi output=$(python test_inference.py -m "$LOCAL_FOLDER" -gs 40,40,40,40 -ed "$DATA_SET") score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+') echo "| $BIT_PRECISION | $score |" # rm -rf "${LOCAL_FOLDER}" done ``` ## Quant Details This is the script used for quantization. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="WizardLM-2-8x22B" # Define variables MODEL_DIR="/mnt/storage/models/$MODEL_NAME" OUTPUT_DIR="exl2_$MODEL_NAME" MEASUREMENT_FILE="measurements/$MODEL_NAME.json" # Create the measurement file if needed if [ ! -f "$MEASUREMENT_FILE" ]; then echo "Creating $MEASUREMENT_FILE" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE fi # Choose one of the below. Either create a single quant for testing or a batch of them. # BIT_PRECISIONS=(2.25) BIT_PRECISIONS=(5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25) for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # If it doesn't already exist, make the quant if [ ! -d "$CONVERTED_FOLDER" ]; then echo "Creating $CONVERTED_FOLDER" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" mkdir "$CONVERTED_FOLDER" # Run conversion commands python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER fi done ```
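The card stops at the quantization and perplexity scripts, so here is a hedged sketch of loading one of these EXL2 quants directly with the exllamav2 Python API, patterned on the library's own example scripts from around version 0.0.18. The local path and sampler settings are placeholders, and the exact API may differ in newer exllamav2 releases.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "/path/to/WizardLM-2-8x22B_exl2_2.5bpw"  # local download of this repo

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("Write a haiku about quantization.", settings, 128))
```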
{"language": ["en"], "license": "apache-2.0", "tags": ["exl2"], "base_model": "microsoft/WizardLM-2-8x22B"}
Dracones/WizardLM-2-8x22B_exl2_2.5bpw
null
[ "transformers", "safetensors", "mixtral", "text-generation", "exl2", "en", "base_model:microsoft/WizardLM-2-8x22B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2024-04-15T21:56:05+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
WizardLM-2-8x22B - EXL2 2.5bpw ============================== This is a 2.5bpw EXL2 quant of microsoft/WizardLM-2-8x22B Details about the model can be found at the above model page. EXL2 Version ------------ These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. Perplexity Scoring ------------------ Below are the perplexity scores for the EXL2 models. A lower score is better. ### Perplexity Script This was the script used for perplexity testing. Quant Details ------------- This is the script used for quantization.
[ "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
text-generation
transformers
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
{"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
shaswatamitra/westlake-finetuned3
null
[ "transformers", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-15T21:57:01+00:00
[]
[]
TAGS #transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit AutoTrain. # Usage
[ "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
[ "TAGS\n#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us \n", "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/allknowingroger/MultiverseMath-12B-MoE <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MultiverseMath-12B-MoE-GGUF/resolve/main/MultiverseMath-12B-MoE.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/MultiverseMath-12B-MoE-GGUF/resolve/main/MultiverseMath-12B-MoE.IQ3_XS.gguf) | IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/MultiverseMath-12B-MoE-GGUF/resolve/main/MultiverseMath-12B-MoE.Q3_K_S.gguf) | Q3_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/MultiverseMath-12B-MoE-GGUF/resolve/main/MultiverseMath-12B-MoE.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MultiverseMath-12B-MoE-GGUF/resolve/main/MultiverseMath-12B-MoE.IQ3_M.gguf) | IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/MultiverseMath-12B-MoE-GGUF/resolve/main/MultiverseMath-12B-MoE.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MultiverseMath-12B-MoE-GGUF/resolve/main/MultiverseMath-12B-MoE.Q3_K_L.gguf) | Q3_K_L | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/MultiverseMath-12B-MoE-GGUF/resolve/main/MultiverseMath-12B-MoE.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/MultiverseMath-12B-MoE-GGUF/resolve/main/MultiverseMath-12B-MoE.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MultiverseMath-12B-MoE-GGUF/resolve/main/MultiverseMath-12B-MoE.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MultiverseMath-12B-MoE-GGUF/resolve/main/MultiverseMath-12B-MoE.Q5_K_S.gguf) | Q5_K_S | 9.0 | | | [GGUF](https://huggingface.co/mradermacher/MultiverseMath-12B-MoE-GGUF/resolve/main/MultiverseMath-12B-MoE.Q5_K_M.gguf) | Q5_K_M | 9.2 | | | [GGUF](https://huggingface.co/mradermacher/MultiverseMath-12B-MoE-GGUF/resolve/main/MultiverseMath-12B-MoE.Q6_K.gguf) | Q6_K | 10.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MultiverseMath-12B-MoE-GGUF/resolve/main/MultiverseMath-12B-MoE.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
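For readers unsure how to consume one of the files in the table above, a small illustrative Python sketch using huggingface_hub and llama-cpp-python follows; the Q4_K_M filename is taken from the table, while the context length, GPU-offload setting, and prompt are assumptions.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file from the repo; Q4_K_M is the "fast, recommended" entry above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/MultiverseMath-12B-MoE-GGUF",
    filename="MultiverseMath-12B-MoE.Q4_K_M.gguf",
)

# n_gpu_layers=-1 offloads every layer if llama.cpp was built with GPU support; use 0 for CPU only.
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)

out = llm("Compute 12 * 37 and answer briefly.", max_tokens=64)
print(out["choices"][0]["text"])
```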
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "allknowingroger/MultiverseEx26-7B-slerp", "DT12the/Math-Mixtral-7B"], "base_model": "allknowingroger/MultiverseMath-12B-MoE", "quantized_by": "mradermacher"}
mradermacher/MultiverseMath-12B-MoE-GGUF
null
[ "transformers", "gguf", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "allknowingroger/MultiverseEx26-7B-slerp", "DT12the/Math-Mixtral-7B", "en", "base_model:allknowingroger/MultiverseMath-12B-MoE", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-15T21:57:57+00:00
[]
[ "en" ]
TAGS #transformers #gguf #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/MultiverseEx26-7B-slerp #DT12the/Math-Mixtral-7B #en #base_model-allknowingroger/MultiverseMath-12B-MoE #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/MultiverseEx26-7B-slerp #DT12the/Math-Mixtral-7B #en #base_model-allknowingroger/MultiverseMath-12B-MoE #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lucyd/mistral_instruct
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T21:58:58+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Grayx/sad_pepe_26
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T21:59:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
{"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
shaswatamitra/westseverus-finetuned3
null
[ "transformers", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-15T22:03:47+00:00
[]
[]
TAGS #transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit AutoTrain. # Usage
[ "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
[ "TAGS\n#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us \n", "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
text-generation
transformers
# WizardLM-2-8x22B - EXL2 4.5bpw This is a 4.5bpw EXL2 quant of [microsoft/WizardLM-2-8x22B](https://huggingface.co/microsoft/WizardLM-2-8x22B) Details about the model can be found at the above model page. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. | Quant Level | Perplexity Score | |-------------|------------------| | 7.0 | 4.5859 | | 6.0 | 4.6252 | | 5.5 | 4.6493 | | 5.0 | 4.6937 | | 4.5 | 4.8029 | | 4.0 | 4.9372 | | 3.5 | 5.1336 | | 3.25 | 5.3636 | | 3.0 | 5.5468 | | 2.75 | 5.8255 | | 2.5 | 6.3362 | | 2.25 | 7.7763 | ### Perplexity Script This was the script used for perplexity testing. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 DATA_SET=/root/wikitext/wikitext-2-v1.parquet # Set the model name and bit size MODEL_NAME="WizardLM-2-8x22B" BIT_PRECISIONS=(6.0 5.5 5.0 4.5 4.0 3.5 3.25 3.0 2.75 2.5 2.25) # Print the markdown table header echo "| Quant Level | Perplexity Score |" echo "|-------------|------------------|" for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do LOCAL_FOLDER="/root/models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" REMOTE_FOLDER="Dracones/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" if [ ! -d "$LOCAL_FOLDER" ]; then huggingface-cli download --local-dir-use-symlinks=False --local-dir "${LOCAL_FOLDER}" "${REMOTE_FOLDER}" >> /root/download.log 2>&1 fi output=$(python test_inference.py -m "$LOCAL_FOLDER" -gs 40,40,40,40 -ed "$DATA_SET") score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+') echo "| $BIT_PRECISION | $score |" # rm -rf "${LOCAL_FOLDER}" done ``` ## Quant Details This is the script used for quantization. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="WizardLM-2-8x22B" # Define variables MODEL_DIR="/mnt/storage/models/$MODEL_NAME" OUTPUT_DIR="exl2_$MODEL_NAME" MEASUREMENT_FILE="measurements/$MODEL_NAME.json" # Create the measurement file if needed if [ ! -f "$MEASUREMENT_FILE" ]; then echo "Creating $MEASUREMENT_FILE" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE fi # Choose one of the below. Either create a single quant for testing or a batch of them. # BIT_PRECISIONS=(2.25) BIT_PRECISIONS=(5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25) for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # If it doesn't already exist, make the quant if [ ! -d "$CONVERTED_FOLDER" ]; then echo "Creating $CONVERTED_FOLDER" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" mkdir "$CONVERTED_FOLDER" # Run conversion commands python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER fi done ```
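The table above lists absolute perplexity per quant level. As a small worked example (values copied from the table; the script itself is not part of the original card), expressing each quant as a relative increase over the 7.0bpw baseline makes the quality/size trade-off easier to read: 4.5bpw sits about 4.7% above the baseline, while 2.25bpw is roughly 70% worse.

```python
# Perplexity values copied from the table above (wikitext-2; lower is better).
ppl = {
    7.0: 4.5859, 6.0: 4.6252, 5.5: 4.6493, 5.0: 4.6937,
    4.5: 4.8029, 4.0: 4.9372, 3.5: 5.1336, 3.25: 5.3636,
    3.0: 5.5468, 2.75: 5.8255, 2.5: 6.3362, 2.25: 7.7763,
}

baseline = ppl[7.0]
for bpw in sorted(ppl, reverse=True):
    increase = (ppl[bpw] - baseline) / baseline * 100
    print(f"{bpw:>5} bpw: {ppl[bpw]:.4f}  (+{increase:.1f}% vs 7.0 bpw)")
```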
{"language": ["en"], "license": "apache-2.0", "tags": ["exl2"], "base_model": "microsoft/WizardLM-2-8x22B"}
Dracones/WizardLM-2-8x22B_exl2_4.5bpw
null
[ "transformers", "safetensors", "mixtral", "text-generation", "exl2", "en", "base_model:microsoft/WizardLM-2-8x22B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T22:05:40+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
WizardLM-2-8x22B - EXL2 4.5bpw ============================== This is a 4.5bpw EXL2 quant of microsoft/WizardLM-2-8x22B Details about the model can be found at the above model page. EXL2 Version ------------ These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. Perplexity Scoring ------------------ Below are the perplexity scores for the EXL2 models. A lower score is better. ### Perplexity Script This was the script used for perplexity testing. Quant Details ------------- This is the script used for quantization.
[ "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
text-generation
transformers
# WizardLM-2-8x22B - EXL2 5.0bpw This is a 5.0bpw EXL2 quant of [microsoft/WizardLM-2-8x22B](https://huggingface.co/microsoft/WizardLM-2-8x22B) Details about the model can be found at the above model page. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. | Quant Level | Perplexity Score | |-------------|------------------| | 7.0 | 4.5859 | | 6.0 | 4.6252 | | 5.5 | 4.6493 | | 5.0 | 4.6937 | | 4.5 | 4.8029 | | 4.0 | 4.9372 | | 3.5 | 5.1336 | | 3.25 | 5.3636 | | 3.0 | 5.5468 | | 2.75 | 5.8255 | | 2.5 | 6.3362 | | 2.25 | 7.7763 | ### Perplexity Script This was the script used for perplexity testing. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 DATA_SET=/root/wikitext/wikitext-2-v1.parquet # Set the model name and bit size MODEL_NAME="WizardLM-2-8x22B" BIT_PRECISIONS=(6.0 5.5 5.0 4.5 4.0 3.5 3.25 3.0 2.75 2.5 2.25) # Print the markdown table header echo "| Quant Level | Perplexity Score |" echo "|-------------|------------------|" for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do LOCAL_FOLDER="/root/models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" REMOTE_FOLDER="Dracones/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" if [ ! -d "$LOCAL_FOLDER" ]; then huggingface-cli download --local-dir-use-symlinks=False --local-dir "${LOCAL_FOLDER}" "${REMOTE_FOLDER}" >> /root/download.log 2>&1 fi output=$(python test_inference.py -m "$LOCAL_FOLDER" -gs 40,40,40,40 -ed "$DATA_SET") score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+') echo "| $BIT_PRECISION | $score |" # rm -rf "${LOCAL_FOLDER}" done ``` ## Quant Details This is the script used for quantization. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="WizardLM-2-8x22B" # Define variables MODEL_DIR="/mnt/storage/models/$MODEL_NAME" OUTPUT_DIR="exl2_$MODEL_NAME" MEASUREMENT_FILE="measurements/$MODEL_NAME.json" # Create the measurement file if needed if [ ! -f "$MEASUREMENT_FILE" ]; then echo "Creating $MEASUREMENT_FILE" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE fi # Choose one of the below. Either create a single quant for testing or a batch of them. # BIT_PRECISIONS=(2.25) BIT_PRECISIONS=(5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25) for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # If it doesn't already exist, make the quant if [ ! -d "$CONVERTED_FOLDER" ]; then echo "Creating $CONVERTED_FOLDER" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" mkdir "$CONVERTED_FOLDER" # Run conversion commands python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER fi done ```
{"language": ["en"], "license": "apache-2.0", "tags": ["exl2"], "base_model": "microsoft/WizardLM-2-8x22B"}
Dracones/WizardLM-2-8x22B_exl2_5.0bpw
null
[ "transformers", "safetensors", "mixtral", "text-generation", "exl2", "en", "base_model:microsoft/WizardLM-2-8x22B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "5-bit", "region:us" ]
null
2024-04-15T22:06:51+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us
WizardLM-2-8x22B - EXL2 5.0bpw ============================== This is a 5.0bpw EXL2 quant of microsoft/WizardLM-2-8x22B Details about the model can be found at the above model page. EXL2 Version ------------ These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. Perplexity Scoring ------------------ Below are the perplexity scores for the EXL2 models. A lower score is better. ### Perplexity Script This was the script used for perplexity testing. Quant Details ------------- This is the script used for quantization.
[ "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us \n", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tapt_helpfulness_base_pretraining_model This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4502 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 11 - total_train_batch_size: 352 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9099 | 0.97 | 27 | 1.6497 | | 1.716 | 1.98 | 55 | 1.6088 | | 1.6549 | 2.99 | 83 | 1.5624 | | 1.6585 | 3.97 | 110 | 1.5455 | | 1.557 | 4.98 | 138 | 1.5446 | | 1.5142 | 5.99 | 166 | 1.5057 | | 1.4788 | 7.0 | 194 | 1.4934 | | 1.5057 | 7.97 | 221 | 1.4714 | | 1.4232 | 8.98 | 249 | 1.4541 | | 1.3778 | 9.74 | 270 | 1.4498 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "roberta-base", "model-index": [{"name": "tapt_helpfulness_base_pretraining_model", "results": []}]}
BigTMiami/tapt_helpfulness_base_pretraining_model
null
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T22:07:21+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
tapt\_helpfulness\_base\_pretraining\_model =========================================== This model is a fine-tuned version of roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.4502 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 11 * total\_train\_batch\_size: 352 * optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.06 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 11\n* total\\_train\\_batch\\_size: 352\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 11\n* total\\_train\\_batch\\_size: 352\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Description [MaziyarPanahi/WizardLM-2-7B-AWQ](https://huggingface.co/MaziyarPanahi/WizardLM-2-7B-AWQ) is a quantized (AWQ) version of [microsoft/WizardLM-2-7B](https://huggingface.co/microsoft/WizardLM-2-7B) ## How to use ### Install the necessary packages ``` pip install --upgrade accelerate autoawq transformers ``` ### Example Python code ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "MaziyarPanahi/WizardLM-2-7B-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id).to(0) text = "User:\nHello can you provide me with top-3 cool places to visit in Paris?\n\nAssistant:\n" inputs = tokenizer(text, return_tensors="pt").to(0) out = model.generate(**inputs, max_new_tokens=300) print(tokenizer.decode(out[0], skip_special_tokens=True)) ``` Results: ``` User: Hello can you provide me with top-3 cool places to visit in Paris? Assistant: Absolutely, here are my top-3 recommendations for must-see places in Paris: 1. The Eiffel Tower: An icon of Paris, this wrought-iron lattice tower is a global cultural icon of France and is among the most recognizable structures in the world. Climbing up to the top offers breathtaking views of the city. 2. The Louvre Museum: Home to thousands of works of art, the Louvre is the world's largest art museum and a historic monument in Paris. Must-see pieces include the Mona Lisa, the Winged Victory of Samothrace, and the Venus de Milo. 3. Notre-Dame Cathedral: This cathedral is a masterpiece of French Gothic architecture and is famous for its intricate stone carvings, beautiful stained glass, and its iconic twin towers. Be sure to spend some time exploring its history and learning about the fascinating restoration efforts post the 2019 fire. I hope you find these recommendations helpful and that they make for an enjoyable and memorable trip to Paris. Safe travels! ```
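The card above covers inference with the quantized checkpoint; for context, here is a hedged sketch of how a 4-bit AWQ checkpoint like this is typically produced with the AutoAWQ library. The group size, kernel version, calibration defaults, and output path are assumptions and are not taken from the card.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

base_model = "microsoft/WizardLM-2-7B"
quant_path = "WizardLM-2-7B-AWQ"  # assumed output directory

# Typical AutoAWQ 4-bit settings; the exact settings used for this repo are not stated.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(base_model, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)

# Runs activation-aware calibration on AutoAWQ's default calibration data, then packs 4-bit weights.
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```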
{"tags": ["finetuned", "quantized", "4-bit", "AWQ", "transformers", "safetensors", "mistral", "text-generation", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us"], "model_name": "WizardLM-2-7B-AWQ", "base_model": "microsoft/WizardLM-2-7B", "inference": false, "model_creator": "microsoft", "pipeline_tag": "text-generation", "quantized_by": "MaziyarPanahi"}
MaziyarPanahi/WizardLM-2-7B-AWQ
null
[ "transformers", "safetensors", "mistral", "text-generation", "finetuned", "quantized", "4-bit", "AWQ", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:microsoft/WizardLM-2-7B" ]
null
2024-04-15T22:08:10+00:00
[ "2304.12244", "2306.08568", "2308.09583" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #finetuned #quantized #4-bit #AWQ #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #base_model-microsoft/WizardLM-2-7B
# Description MaziyarPanahi/WizardLM-2-7B-AWQ is a quantized (AWQ) version of microsoft/WizardLM-2-7B ## How to use ### Install the necessary packages ### Example Python code Results:
[ "# Description\nMaziyarPanahi/WizardLM-2-7B-AWQ is a quantized (AWQ) version of microsoft/WizardLM-2-7B", "## How to use", "### Install the necessary packages", "### Example Python code\n\n\n\n\nResults:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #finetuned #quantized #4-bit #AWQ #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #base_model-microsoft/WizardLM-2-7B \n", "# Description\nMaziyarPanahi/WizardLM-2-7B-AWQ is a quantized (AWQ) version of microsoft/WizardLM-2-7B", "## How to use", "### Install the necessary packages", "### Example Python code\n\n\n\n\nResults:" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
suneeln-duke/dukebot-qac-v2
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-15T22:08:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_4-seqsight_4096_512_46M-L32_all

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4795
- F1 Score: 0.7058
- Accuracy: 0.707

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5752 | 20.0 | 200 | 0.5604 | 0.7170 | 0.72 |
| 0.4409 | 40.0 | 400 | 0.5556 | 0.7500 | 0.75 |
| 0.3652 | 60.0 | 600 | 0.5812 | 0.7613 | 0.764 |
| 0.3083 | 80.0 | 800 | 0.5861 | 0.7786 | 0.78 |
| 0.2614 | 100.0 | 1000 | 0.5752 | 0.7885 | 0.79 |
| 0.2262 | 120.0 | 1200 | 0.6411 | 0.7720 | 0.776 |
| 0.1963 | 140.0 | 1400 | 0.6353 | 0.7943 | 0.797 |
| 0.1725 | 160.0 | 1600 | 0.6748 | 0.8021 | 0.804 |
| 0.1537 | 180.0 | 1800 | 0.6053 | 0.8122 | 0.813 |
| 0.1371 | 200.0 | 2000 | 0.7733 | 0.8024 | 0.804 |
| 0.1227 | 220.0 | 2200 | 0.6877 | 0.8072 | 0.808 |
| 0.1104 | 240.0 | 2400 | 0.7540 | 0.8058 | 0.807 |
| 0.1012 | 260.0 | 2600 | 0.7419 | 0.7996 | 0.801 |
| 0.0926 | 280.0 | 2800 | 0.8370 | 0.8049 | 0.807 |
| 0.0861 | 300.0 | 3000 | 0.7632 | 0.8036 | 0.805 |
| 0.0807 | 320.0 | 3200 | 0.8092 | 0.8065 | 0.808 |
| 0.0728 | 340.0 | 3400 | 0.7931 | 0.8079 | 0.809 |
| 0.0697 | 360.0 | 3600 | 0.8385 | 0.8023 | 0.804 |
| 0.0631 | 380.0 | 3800 | 0.7814 | 0.7997 | 0.801 |
| 0.0594 | 400.0 | 4000 | 0.8248 | 0.8048 | 0.806 |
| 0.0565 | 420.0 | 4200 | 0.8527 | 0.8048 | 0.806 |
| 0.0543 | 440.0 | 4400 | 0.8633 | 0.7912 | 0.793 |
| 0.051 | 460.0 | 4600 | 0.8992 | 0.7958 | 0.798 |
| 0.0482 | 480.0 | 4800 | 0.9369 | 0.8042 | 0.806 |
| 0.0456 | 500.0 | 5000 | 0.9993 | 0.7860 | 0.789 |
| 0.0427 | 520.0 | 5200 | 0.8551 | 0.8079 | 0.809 |
| 0.0413 | 540.0 | 5400 | 0.8924 | 0.8088 | 0.81 |
| 0.0388 | 560.0 | 5600 | 0.8621 | 0.8081 | 0.809 |
| 0.037 | 580.0 | 5800 | 0.9148 | 0.8037 | 0.805 |
| 0.0364 | 600.0 | 6000 | 0.9264 | 0.8150 | 0.816 |
| 0.0348 | 620.0 | 6200 | 0.9849 | 0.8006 | 0.803 |
| 0.0337 | 640.0 | 6400 | 0.8709 | 0.8069 | 0.808 |
| 0.0327 | 660.0 | 6600 | 0.9665 | 0.8065 | 0.808 |
| 0.0312 | 680.0 | 6800 | 0.9153 | 0.8052 | 0.807 |
| 0.0291 | 700.0 | 7000 | 0.9097 | 0.8037 | 0.805 |
| 0.0285 | 720.0 | 7200 | 0.9881 | 0.8021 | 0.804 |
| 0.0276 | 740.0 | 7400 | 0.9819 | 0.8097 | 0.811 |
| 0.0273 | 760.0 | 7600 | 0.8886 | 0.8183 | 0.819 |
| 0.0268 | 780.0 | 7800 | 0.9221 | 0.8120 | 0.813 |
| 0.0252 | 800.0 | 8000 | 0.9351 | 0.8190 | 0.82 |
| 0.0247 | 820.0 | 8200 | 0.9857 | 0.8044 | 0.806 |
| 0.0238 | 840.0 | 8400 | 0.9679 | 0.8115 | 0.813 |
| 0.0239 | 860.0 | 8600 | 0.9835 | 0.8066 | 0.808 |
| 0.023 | 880.0 | 8800 | 0.9684 | 0.8087 | 0.81 |
| 0.0224 | 900.0 | 9000 | 0.9745 | 0.8160 | 0.817 |
| 0.0215 | 920.0 | 9200 | 0.9475 | 0.8120 | 0.813 |
| 0.0219 | 940.0 | 9400 | 0.9775 | 0.8097 | 0.811 |
| 0.0214 | 960.0 | 9600 | 0.9836 | 0.8107 | 0.812 |
| 0.0208 | 980.0 | 9800 | 1.0088 | 0.8117 | 0.813 |
| 0.0207 | 1000.0 | 10000 | 0.9893 | 0.8118 | 0.813 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_4-seqsight_4096_512_46M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_tf_4-seqsight_4096_512_46M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-15T22:08:41+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_tf\_4-seqsight\_4096\_512\_46M-L32\_all ============================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset. It achieves the following results on the evaluation set: * Loss: 1.4795 * F1 Score: 0.7058 * Accuracy: 0.707 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
adhi29/model_albert_512_token_classification
null
[ "transformers", "safetensors", "albert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T22:09:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #albert #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #albert #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_3-seqsight_4096_512_46M-L32_all

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6572
- F1 Score: 0.6237
- Accuracy: 0.624

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6563 | 14.29 | 200 | 0.6304 | 0.6227 | 0.637 |
| 0.5872 | 28.57 | 400 | 0.6372 | 0.6503 | 0.651 |
| 0.5501 | 42.86 | 600 | 0.6450 | 0.6560 | 0.656 |
| 0.5151 | 57.14 | 800 | 0.6870 | 0.6278 | 0.629 |
| 0.4835 | 71.43 | 1000 | 0.6903 | 0.6377 | 0.639 |
| 0.4521 | 85.71 | 1200 | 0.7085 | 0.6390 | 0.639 |
| 0.4224 | 100.0 | 1400 | 0.7422 | 0.6236 | 0.624 |
| 0.394 | 114.29 | 1600 | 0.7565 | 0.6261 | 0.626 |
| 0.3672 | 128.57 | 1800 | 0.7998 | 0.6250 | 0.625 |
| 0.3427 | 142.86 | 2000 | 0.8862 | 0.6207 | 0.621 |
| 0.3219 | 157.14 | 2200 | 0.8307 | 0.6257 | 0.628 |
| 0.2994 | 171.43 | 2400 | 0.8558 | 0.6278 | 0.628 |
| 0.2812 | 185.71 | 2600 | 0.9317 | 0.6318 | 0.633 |
| 0.266 | 200.0 | 2800 | 0.8704 | 0.6359 | 0.636 |
| 0.249 | 214.29 | 3000 | 0.9145 | 0.6327 | 0.633 |
| 0.2349 | 228.57 | 3200 | 0.9150 | 0.6181 | 0.618 |
| 0.2201 | 242.86 | 3400 | 0.9539 | 0.6229 | 0.623 |
| 0.2078 | 257.14 | 3600 | 0.9723 | 0.6288 | 0.629 |
| 0.1969 | 271.43 | 3800 | 0.9980 | 0.6337 | 0.634 |
| 0.1882 | 285.71 | 4000 | 0.9753 | 0.6371 | 0.637 |
| 0.1785 | 300.0 | 4200 | 1.0100 | 0.6281 | 0.628 |
| 0.1704 | 314.29 | 4400 | 1.0297 | 0.6281 | 0.628 |
| 0.1634 | 328.57 | 4600 | 1.0690 | 0.6405 | 0.641 |
| 0.1552 | 342.86 | 4800 | 1.1005 | 0.6301 | 0.63 |
| 0.1489 | 357.14 | 5000 | 1.1284 | 0.644 | 0.644 |
| 0.1425 | 371.43 | 5200 | 1.0903 | 0.6331 | 0.633 |
| 0.1376 | 385.71 | 5400 | 1.0982 | 0.6340 | 0.634 |
| 0.1322 | 400.0 | 5600 | 1.1406 | 0.6341 | 0.634 |
| 0.1281 | 414.29 | 5800 | 1.1843 | 0.6421 | 0.642 |
| 0.1234 | 428.57 | 6000 | 1.1615 | 0.6330 | 0.633 |
| 0.1198 | 442.86 | 6200 | 1.1862 | 0.6391 | 0.639 |
| 0.1159 | 457.14 | 6400 | 1.1801 | 0.6338 | 0.634 |
| 0.1126 | 471.43 | 6600 | 1.1569 | 0.63 | 0.63 |
| 0.1087 | 485.71 | 6800 | 1.2002 | 0.6311 | 0.631 |
| 0.105 | 500.0 | 7000 | 1.1850 | 0.6341 | 0.634 |
| 0.104 | 514.29 | 7200 | 1.1667 | 0.6260 | 0.626 |
| 0.1002 | 528.57 | 7400 | 1.2212 | 0.6280 | 0.628 |
| 0.0976 | 542.86 | 7600 | 1.2341 | 0.6331 | 0.633 |
| 0.096 | 557.14 | 7800 | 1.2224 | 0.6431 | 0.643 |
| 0.0941 | 571.43 | 8000 | 1.2070 | 0.6421 | 0.642 |
| 0.0923 | 585.71 | 8200 | 1.2359 | 0.6381 | 0.638 |
| 0.0903 | 600.0 | 8400 | 1.2435 | 0.6361 | 0.636 |
| 0.0883 | 614.29 | 8600 | 1.2519 | 0.6401 | 0.64 |
| 0.087 | 628.57 | 8800 | 1.2604 | 0.6350 | 0.635 |
| 0.0849 | 642.86 | 9000 | 1.2590 | 0.6391 | 0.639 |
| 0.0847 | 657.14 | 9200 | 1.2590 | 0.6421 | 0.642 |
| 0.0838 | 671.43 | 9400 | 1.2622 | 0.6431 | 0.643 |
| 0.0829 | 685.71 | 9600 | 1.2622 | 0.6351 | 0.635 |
| 0.0816 | 700.0 | 9800 | 1.2662 | 0.6401 | 0.64 |
| 0.0809 | 714.29 | 10000 | 1.2650 | 0.6441 | 0.644 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_3-seqsight_4096_512_46M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_tf_3-seqsight_4096_512_46M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-15T22:13:25+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_tf\_3-seqsight\_4096\_512\_46M-L32\_all ============================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset. It achieves the following results on the evaluation set: * Loss: 0.6572 * F1 Score: 0.6237 * Accuracy: 0.624 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistralv1_spectral_r8_5e5_e3 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistralv1_spectral_r8_5e5_e3", "results": []}]}
fangzhaoz/mistralv1_spectral_r8_5e5_e3
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-15T22:14:03+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
# mistralv1_spectral_r8_5e5_e3 This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# mistralv1_spectral_r8_5e5_e3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n", "# mistralv1_spectral_r8_5e5_e3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
guoyu-zhang/hh_shp1_dpo3
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T22:15:47+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
xugefu/openchat_3.5-touch-rugby-rules-adapters
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T22:16:05+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
fangzhaoz/mistralv1_spectral_r8_5e5_e3_merged
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T22:18:49+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
This model was fine-tuned on a custom dataset. The original model is available here: https://huggingface.co/kevinscaria/joint_tk-instruct-base-def-pos-neg-neut-combined If you use this model, please cite: @inproceedings{Scaria2023InstructABSAIL, title={InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis}, author={Kevin Scaria and Himanshu Gupta and Saurabh Arjun Sawant and Swaroop Mishra and Chitta Baral}, year={2023} }
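Below is a minimal usage sketch for this checkpoint, assuming it loads as a standard T5 `text2text-generation` model and follows an InstructABSA-style instruction prompt; the prompt wording and example review are illustrative placeholders, not taken from the original training data.

```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint as a text2text-generation pipeline.
absa = pipeline("text2text-generation", model="Homeskills/mt_instruct_absa")

# Illustrative instruction-style prompt; the exact prompt format used during
# fine-tuning is not documented here, so adapt it to your own dataset.
prompt = (
    "Definition: Extract the aspect terms and their sentiment from the review. "
    "Input: The battery life is great but the screen is too dim."
)

result = absa(prompt, max_new_tokens=64)
print(result[0]["generated_text"])
```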
{}
Homeskills/mt_instruct_absa
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2024-04-15T22:20:33+00:00
[]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
This model was fine-tuned on a custom dataset. The original model is available here: URL If you use this model, please cite: @inproceedings{Scaria2023InstructABSAIL, title={InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis}, author={Kevin Scaria and Himanshu Gupta and Saurabh Arjun Sawant and Swaroop Mishra and Chitta Baral}, year={2023} }
[]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mdeberta-v3-base-on-custom-kural-500 This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2705 - Accuracy: 0.93 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 25 | 0.4294 | 0.85 | | No log | 2.0 | 50 | 0.2183 | 0.92 | | No log | 3.0 | 75 | 0.4484 | 0.88 | | No log | 4.0 | 100 | 0.5041 | 0.87 | | No log | 5.0 | 125 | 0.2482 | 0.93 | | No log | 6.0 | 150 | 0.9998 | 0.81 | | No log | 7.0 | 175 | 0.2305 | 0.94 | | No log | 8.0 | 200 | 0.2145 | 0.95 | | No log | 9.0 | 225 | 0.2428 | 0.94 | | No log | 10.0 | 250 | 0.2705 | 0.93 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
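The card does not include inference code, so the following is a minimal sketch for querying the fine-tuned classifier, assuming the checkpoint ships with the label mapping from its (undocumented) custom Kural dataset; the input sentence is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned mDeBERTa-v3 classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="bikram22pi7/mdeberta-v3-base-on-custom-kural-500",
)

# Placeholder input; the label set comes from the custom training data.
print(classifier("Replace this with a sentence from your target domain."))
```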
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/mdeberta-v3-base", "model-index": [{"name": "mdeberta-v3-base-on-custom-kural-500", "results": []}]}
bikram22pi7/mdeberta-v3-base-on-custom-kural-500
null
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/mdeberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T22:23:52+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/mdeberta-v3-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
mdeberta-v3-base-on-custom-kural-500 ==================================== This model is a fine-tuned version of microsoft/mdeberta-v3-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.2705 * Accuracy: 0.93 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/mdeberta-v3-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ruBert-base-sberquad-0.001-len_2-filtered-negative-v2 This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 7000 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
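Because this repository stores a PEFT adapter rather than full weights, it has to be attached to the `ai-forever/ruBert-base` backbone at load time. A minimal loading sketch follows; the task head is not documented in the card, so `AutoModel` is used only to demonstrate the mechanics and should be swapped for the task-specific class the adapter was actually trained with.

```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_id = "ai-forever/ruBert-base"
adapter_id = "Shalazary/ruBert-base-sberquad-0.001-len_2-filtered-negative-v2"

# Load the frozen backbone, then attach the PEFT adapter on top of it.
# If the adapter was trained with a task head (e.g. question answering),
# replace AutoModel with the matching AutoModelFor... class.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModel.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```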
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.001-len_2-filtered-negative-v2", "results": []}]}
Shalazary/ruBert-base-sberquad-0.001-len_2-filtered-negative-v2
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:ai-forever/ruBert-base", "license:apache-2.0", "region:us" ]
null
2024-04-15T22:24:34+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
# ruBert-base-sberquad-0.001-len_2-filtered-negative-v2 This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 7000 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# ruBert-base-sberquad-0.001-len_2-filtered-negative-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n", "# ruBert-base-sberquad-0.001-len_2-filtered-negative-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
bdsaglam/llama-2-7b-chat-jerx-peft-5fr5ra4m
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T22:26:06+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# CalmexperimentT3qm7-7B CalmexperimentT3qm7-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 - model: allknowingroger/CalmExperiment-7B-slerp - model: nlpguy/T3QM7 merge_method: model_stock base_model: mistralai/Mistral-7B-v0.1 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/CalmexperimentT3qm7-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]}
automerger/CalmexperimentT3qm7-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T22:26:15+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# CalmexperimentT3qm7-7B CalmexperimentT3qm7-7B is an automated merge created by Maxime Labonne using the following configuration. ## Configuration ## Usage
[ "# CalmexperimentT3qm7-7B\n\nCalmexperimentT3qm7-7B is an automated merge created by Maxime Labonne using the following configuration.", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# CalmexperimentT3qm7-7B\n\nCalmexperimentT3qm7-7B is an automated merge created by Maxime Labonne using the following configuration.", "## Configuration", "## Usage" ]
summarization
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xsum_unaligned_smallT5 This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 200000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
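The card omits inference code; a minimal summarization sketch is given below, assuming the checkpoint is a drop-in replacement for `t5-small` in the standard pipeline. The article text is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned T5-small summarizer.
summarizer = pipeline("summarization", model="paulh27/xsum_unaligned_smallT5")

article = "Replace this with the news article you want to summarize."
summary = summarizer(article, max_length=64, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```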
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "base_model": "google-t5/t5-small", "model-index": [{"name": "xsum_unaligned_smallT5", "results": []}]}
paulh27/xsum_unaligned_smallT5
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "summarization", "generated_from_trainer", "base_model:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T22:27:43+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #summarization #generated_from_trainer #base_model-google-t5/t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# xsum_unaligned_smallT5 This model is a fine-tuned version of google-t5/t5-small on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 200000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# xsum_unaligned_smallT5\n\nThis model is a fine-tuned version of google-t5/t5-small on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 200000\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #summarization #generated_from_trainer #base_model-google-t5/t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# xsum_unaligned_smallT5\n\nThis model is a fine-tuned version of google-t5/t5-small on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 200000\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper whisper-large-v3 ar1 - Mohamed Shaaban This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Common standard ar Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4220 - Wer: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.5721 | 1.0 | 1 | 2.1602 | 100.0 | | 0.5723 | 2.0 | 2 | 1.0610 | 33.3333 | | 0.1861 | 3.0 | 3 | 0.6003 | 33.3333 | | 0.0478 | 4.0 | 4 | 0.4661 | 0.0 | | 0.0262 | 5.0 | 5 | 0.4220 | 0.0 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
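For completeness, a minimal transcription sketch with the fine-tuned checkpoint is shown below, assuming a local Arabic audio file is available; the file name is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned Whisper large-v3 checkpoint for Arabic speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="Mohamedshaaban2001/MSDC-whisper-large-v3-56",
)

# Placeholder path; any audio file readable by ffmpeg works here.
result = asr("arabic_sample.wav")
print(result["text"])
```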
{"language": ["ar"], "license": "apache-2.0", "tags": ["whisper-event", "generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "metrics": ["wer"], "base_model": "openai/whisper-large-v3", "model-index": [{"name": "Whisper whisper-large-v3\t ar1 - Mohamed Shaaban", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common standard ar Voice 11.0", "type": "mozilla-foundation/common_voice_11_0"}, "metrics": [{"type": "wer", "value": 0.0, "name": "Wer"}]}]}]}
Mohamedshaaban2001/MSDC-whisper-large-v3-56
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "ar", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-large-v3", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-15T22:28:42+00:00
[]
[ "ar" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #whisper-event #generated_from_trainer #ar #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-large-v3 #license-apache-2.0 #model-index #endpoints_compatible #region-us
Whisper whisper-large-v3 ar1 - Mohamed Shaaban ============================================== This model is a fine-tuned version of openai/whisper-large-v3 on the Common standard ar Voice 11.0 dataset. It achieves the following results on the evaluation set: * Loss: 0.4220 * Wer: 0.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1 * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #whisper-event #generated_from_trainer #ar #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-large-v3 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
reinforcement-learning
stable-baselines3
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga amine-01 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga amine-01 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga amine-01 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
{"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "719.00 +/- 281.42", "name": "mean_reward", "verified": false}]}]}]}
amine-01/dqn-SpaceInvadersNoFrameskip-v4
null
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-15T22:32:09+00:00
[]
[]
TAGS #stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# DQN Agent playing SpaceInvadersNoFrameskip-v4 This is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4 using the stable-baselines3 library and the RL Zoo. The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: URL SB3: URL SB3 Contrib: URL Install the RL Zoo (with SB3 and SB3-Contrib): If you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do: ## Training (with the RL Zoo) ## Hyperparameters # Environment Arguments
[ "# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.", "## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:", "## Training (with the RL Zoo)", "## Hyperparameters", "# Environment Arguments" ]
[ "TAGS\n#stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.", "## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:", "## Training (with the RL Zoo)", "## Hyperparameters", "# Environment Arguments" ]
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
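The usage section above is still a TODO stub, so one possible way to load and evaluate the checkpoint is sketched below. The zip filename follows the default naming of the SB3 Hub integration and is an assumption, not confirmed by the repository.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed, not confirmed.
checkpoint = load_from_hub(
    repo_id="NugentMichael/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode with the trained policy.
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```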
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "246.92 +/- 17.75", "name": "mean_reward", "verified": false}]}]}]}
NugentMichael/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-15T22:33:53+00:00
[]
[]
TAGS #stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# PPO Agent playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
text-generation
transformers
# microsoft/WizardLM-2-7B AWQ - Model creator: [microsoft](https://huggingface.co/microsoft) - Original model: [WizardLM-2-7B](https://huggingface.co/microsoft/WizardLM-2-7B) ## Model Summary We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent. New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B. ## How to use ### Install the necessary packages ```bash pip install --upgrade accelerate autoawq autoawq-kernels transformers ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/WizardLM-2-7B-AWQ" system_message = "You are WizardLM, incarnated as a powerful AI." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code ## Prompt template: ChatML ```plaintext <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ```
{"license": "apache-2.0", "tags": ["transformers", "safetensors", "mistral", "finetuned", "4-bit", "AWQ", "text-generation", "text-generation-inference", "autotrain_compatible", "endpoints_compatible", "chatml", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583"], "model_name": "WizardLM-2-7B", "model_creator": "microsoft", "base_model": "microsoft/WizardLM-2-7B", "inference": false, "pipeline_tag": "text-generation", "quantized_by": "Suparious"}
solidrust/WizardLM-2-7B-AWQ
null
[ "transformers", "safetensors", "mistral", "text-generation", "finetuned", "4-bit", "AWQ", "text-generation-inference", "autotrain_compatible", "endpoints_compatible", "chatml", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "base_model:microsoft/WizardLM-2-7B", "license:apache-2.0", "region:us" ]
null
2024-04-15T22:40:04+00:00
[ "2304.12244", "2306.08568", "2308.09583" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #finetuned #4-bit #AWQ #text-generation-inference #autotrain_compatible #endpoints_compatible #chatml #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #base_model-microsoft/WizardLM-2-7B #license-apache-2.0 #region-us
# microsoft/WizardLM-2-7B AWQ - Model creator: microsoft - Original model: WizardLM-2-7B ## Model Summary We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent. New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B. ## How to use ### Install the necessary packages ### Example Python code ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - Text Generation Webui - using Loader: AutoAWQ - vLLM - version 0.2.2 or later for support for all model types. - Hugging Face Text Generation Inference (TGI) - Transformers version 4.35.0 and later, from any code or client that supports Transformers - AutoAWQ - for use from Python code ## Prompt template: ChatML
[ "# microsoft/WizardLM-2-7B AWQ\n\n- Model creator: microsoft\n- Original model: WizardLM-2-7B", "## Model Summary\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code", "## Prompt template: ChatML" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #finetuned #4-bit #AWQ #text-generation-inference #autotrain_compatible #endpoints_compatible #chatml #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #base_model-microsoft/WizardLM-2-7B #license-apache-2.0 #region-us \n", "# microsoft/WizardLM-2-7B AWQ\n\n- Model creator: microsoft\n- Original model: WizardLM-2-7B", "## Model Summary\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code", "## Prompt template: ChatML" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # idefics2-8b-docvqa-finetuned-tutorial This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "HuggingFaceM4/idefics2-8b", "model-index": [{"name": "idefics2-8b-docvqa-finetuned-tutorial", "results": []}]}
nkasmanoff/idefics2-8b-docvqa-finetuned-tutorial
null
[ "safetensors", "generated_from_trainer", "base_model:HuggingFaceM4/idefics2-8b", "license:apache-2.0", "region:us" ]
null
2024-04-15T22:41:00+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us
# idefics2-8b-docvqa-finetuned-tutorial This model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# idefics2-8b-docvqa-finetuned-tutorial\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 2\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us \n", "# idefics2-8b-docvqa-finetuned-tutorial\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 2\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
keras
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | None | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | None | | jit_compile | False | | is_legacy_optimizer | False | | learning_rate | 0.0010000000474974513 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 |
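Since the card only documents optimizer settings, here is a minimal loading sketch using the Keras Hub integration. The 224x224 RGB input shape is an assumption based on the MobileNetV2 naming and should be replaced by the preprocessing actually used during training.

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

# Download and rebuild the saved Keras model from the Hub.
model = from_pretrained_keras("anrhi/mobile_v2__fake_image_Mb_detection")
model.summary()

# Placeholder input; real images must be resized/normalized the same way
# as during training, which this card does not document.
dummy_image = np.random.rand(1, 224, 224, 3).astype("float32")
print(model.predict(dummy_image))
```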
{"library_name": "keras"}
anrhi/mobile_v2__fake_image_Mb_detection
null
[ "keras", "region:us" ]
null
2024-04-15T22:42:01+00:00
[]
[]
TAGS #keras #region-us
Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training:
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:" ]
[ "TAGS\n#keras #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
xPXXX/Llama2_finetune_wikipedianq
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T22:42:14+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
num_epochs = 15000 gradient_accumulation_steps = 1 learning_rate = 7e-5 lr_warmup_steps = 500 [ transforms.Resize((config.image_size, config.image_size)), transforms.RandomRotation(30), transforms.RandomHorizontalFlip(), transforms.RandomVerticalFlip(), transforms.ToTensor(), transforms.Normalize([0.5], [0.5]), ] sample_size=config.image_size, # the target image resolution in_channels=3, # the number of input channels, 3 for RGB images out_channels=3, # the number of output channels layers_per_block=2, # how many ResNet layers to use per UNet block block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channels for each UNet block down_block_types=( "DownBlock2D", # a regular ResNet downsampling block "DownBlock2D", "DownBlock2D", "DownBlock2D", "AttnDownBlock2D", # a ResNet downsampling block with spatial self-attention "DownBlock2D", ),
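The fragments above list training hyperparameters, an augmentation pipeline, and `UNet2DModel` constructor arguments without the surrounding code. A minimal sketch of how they could be assembled with `torchvision` and `diffusers` follows; the `TrainingConfig` dataclass, the `image_size` value, and the `up_block_types` are assumptions that do not appear in the card.

```python
# Illustrative sketch only: values not shown in the card are assumptions.
from dataclasses import dataclass

from torchvision import transforms
from diffusers import UNet2DModel


@dataclass
class TrainingConfig:
    image_size: int = 128          # assumed target resolution
    num_epochs: int = 15000
    gradient_accumulation_steps: int = 1
    learning_rate: float = 7e-5
    lr_warmup_steps: int = 500


config = TrainingConfig()

# Augmentation pipeline built from the transforms listed above.
preprocess = transforms.Compose(
    [
        transforms.Resize((config.image_size, config.image_size)),
        transforms.RandomRotation(30),
        transforms.RandomHorizontalFlip(),
        transforms.RandomVerticalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ]
)

# UNet built from the listed arguments; up_block_types are assumed to mirror the down blocks.
model = UNet2DModel(
    sample_size=config.image_size,
    in_channels=3,
    out_channels=3,
    layers_per_block=2,
    block_out_channels=(128, 128, 256, 256, 512, 512),
    down_block_types=(
        "DownBlock2D",
        "DownBlock2D",
        "DownBlock2D",
        "DownBlock2D",
        "AttnDownBlock2D",
        "DownBlock2D",
    ),
    up_block_types=(
        "UpBlock2D",
        "AttnUpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
    ),
)
```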
{}
Giux22/semana6-patrones_concentricos_con_fondo
null
[ "region:us" ]
null
2024-04-15T22:47:44+00:00
[]
[]
TAGS #region-us
num_epochs = 15000 gradient_accumulation_steps = 1 learning_rate = 7e-5 lr_warmup_steps = 500 [ transforms.Resize((config.image_size, config.image_size)), transforms.RandomRotation(30), transforms.RandomHorizontalFlip(), transforms.RandomVerticalFlip(), transforms.ToTensor(), transforms.Normalize([0.5], [0.5]), ] sample_size=config.image_size, # the target image resolution in_channels=3, # the number of input channels, 3 for RGB images out_channels=3, # the number of output channels layers_per_block=2, # how many ResNet layers to use per UNet block block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channels for each UNet block down_block_types=( "DownBlock2D", # a regular ResNet downsampling block "DownBlock2D", "DownBlock2D", "DownBlock2D", "AttnDownBlock2D", # a ResNet downsampling block with spatial self-attention "DownBlock2D", ),
[ "# the target image resolution\n in_channels=3, # the number of input channels, 3 for RGB images\n out_channels=3, # the number of output channels\n layers_per_block=2, # how many ResNet layers to use per UNet block\n block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channels for each UNet block\n down_block_types=(\n \"DownBlock2D\", # a regular ResNet downsampling block\n \"DownBlock2D\",\n \"DownBlock2D\",\n \"DownBlock2D\",\n \"AttnDownBlock2D\", # a ResNet downsampling block with spatial self-attention\n \"DownBlock2D\",\n )," ]
[ "TAGS\n#region-us \n", "# the target image resolution\n in_channels=3, # the number of input channels, 3 for RGB images\n out_channels=3, # the number of output channels\n layers_per_block=2, # how many ResNet layers to use per UNet block\n block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channels for each UNet block\n down_block_types=(\n \"DownBlock2D\", # a regular ResNet downsampling block\n \"DownBlock2D\",\n \"DownBlock2D\",\n \"DownBlock2D\",\n \"AttnDownBlock2D\", # a ResNet downsampling block with spatial self-attention\n \"DownBlock2D\",\n )," ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_2-seqsight_4096_512_46M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset. It achieves the following results on the evaluation set: - Loss: 1.1486 - F1 Score: 0.6960 - Accuracy: 0.696 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6122 | 20.0 | 200 | 0.6339 | 0.6688 | 0.672 | | 0.5115 | 40.0 | 400 | 0.6670 | 0.6639 | 0.665 | | 0.4603 | 60.0 | 600 | 0.6962 | 0.6700 | 0.67 | | 0.4113 | 80.0 | 800 | 0.7363 | 0.6592 | 0.66 | | 0.3605 | 100.0 | 1000 | 0.8646 | 0.6239 | 0.634 | | 0.3189 | 120.0 | 1200 | 0.8720 | 0.6500 | 0.65 | | 0.2826 | 140.0 | 1400 | 0.9056 | 0.6502 | 0.651 | | 0.2504 | 160.0 | 1600 | 0.9260 | 0.6538 | 0.654 | | 0.2259 | 180.0 | 1800 | 1.0555 | 0.6602 | 0.661 | | 0.2036 | 200.0 | 2000 | 1.0355 | 0.6640 | 0.664 | | 0.1813 | 220.0 | 2200 | 1.1234 | 0.6623 | 0.664 | | 0.1682 | 240.0 | 2400 | 1.0758 | 0.6656 | 0.666 | | 0.1523 | 260.0 | 2600 | 1.1427 | 0.6660 | 0.666 | | 0.1411 | 280.0 | 2800 | 1.1675 | 0.6587 | 0.659 | | 0.131 | 300.0 | 3000 | 1.1165 | 0.6690 | 0.669 | | 0.1201 | 320.0 | 3200 | 1.1777 | 0.6710 | 0.671 | | 0.1142 | 340.0 | 3400 | 1.2282 | 0.6708 | 0.671 | | 0.107 | 360.0 | 3600 | 1.2469 | 0.6810 | 0.681 | | 0.0994 | 380.0 | 3800 | 1.2036 | 0.6718 | 0.672 | | 0.0924 | 400.0 | 4000 | 1.2638 | 0.6708 | 0.671 | | 0.088 | 420.0 | 4200 | 1.3391 | 0.6850 | 0.685 | | 0.0826 | 440.0 | 4400 | 1.3259 | 0.6778 | 0.678 | | 0.0786 | 460.0 | 4600 | 1.4104 | 0.6809 | 0.681 | | 0.0747 | 480.0 | 4800 | 1.2826 | 0.6739 | 0.674 | | 0.0718 | 500.0 | 5000 | 1.3946 | 0.6770 | 0.677 | | 0.0683 | 520.0 | 5200 | 1.3975 | 0.6678 | 0.668 | | 0.0646 | 540.0 | 5400 | 1.4444 | 0.6728 | 0.673 | | 0.0615 | 560.0 | 5600 | 1.4051 | 0.6654 | 0.666 | | 0.0594 | 580.0 | 5800 | 1.4298 | 0.6658 | 0.666 | | 0.0572 | 600.0 | 6000 | 1.4528 | 0.6745 | 0.675 | | 0.0555 | 620.0 | 6200 | 1.4730 | 0.6730 | 0.673 | | 0.0527 | 640.0 | 6400 | 1.4635 | 0.6679 | 0.668 | | 0.0508 | 660.0 | 6600 | 1.4890 | 0.6729 | 0.673 | | 0.0488 | 680.0 | 6800 | 1.5215 | 0.6749 | 0.675 | | 0.0477 | 700.0 | 7000 | 1.4580 | 0.6840 | 0.684 | | 0.0466 | 720.0 | 7200 | 1.4903 | 0.6820 | 0.682 | | 0.0449 | 740.0 | 7400 | 1.4816 | 0.6810 | 0.681 | | 0.044 | 760.0 | 7600 | 1.5380 | 0.6760 | 0.676 | | 0.0428 | 780.0 | 7800 | 1.5270 | 0.6749 | 0.675 | | 0.0414 | 800.0 | 8000 | 1.5375 | 0.6780 | 0.678 | | 0.04 | 820.0 | 8200 | 1.5700 | 0.6790 | 0.679 | | 0.039 | 840.0 | 8400 | 1.5338 | 0.6790 | 0.679 | | 0.0386 | 860.0 | 8600 | 1.5600 | 0.6780 | 0.678 | | 0.0377 | 880.0 | 8800 | 1.5343 | 0.6730 | 0.673 | | 0.0365 | 900.0 | 9000 | 1.5498 | 0.6770 | 0.677 | | 0.0356 | 920.0 | 
9200 | 1.5850 | 0.6780 | 0.678 | | 0.0357 | 940.0 | 9400 | 1.5876 | 0.684 | 0.684 | | 0.0351 | 960.0 | 9600 | 1.5998 | 0.6780 | 0.678 | | 0.0343 | 980.0 | 9800 | 1.5825 | 0.6810 | 0.681 | | 0.035 | 1000.0 | 10000 | 1.5793 | 0.6800 | 0.68 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
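The card stops at training details and gives no inference example, so here is a minimal, hedged sketch of loading the adapter on top of its base model with PEFT. The sequence-classification head, the two-label setup, and `trust_remote_code` are assumptions, not details taken from the card.

```python
# Sketch under assumptions: the base model is loaded as a 2-label sequence classifier.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_2-seqsight_4096_512_46M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
prediction = model(**inputs).logits.argmax(dim=-1)
print(prediction)
```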
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_2-seqsight_4096_512_46M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_tf_2-seqsight_4096_512_46M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-15T22:48:12+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_tf\_2-seqsight\_4096\_512\_46M-L32\_all ============================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset. It achieves the following results on the evaluation set: * Loss: 1.1486 * F1 Score: 0.6960 * Accuracy: 0.696 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-to-image
diffusers
### Ghst-Cllgs on Stable Diffusion via Dreambooth #### model by ina-hre This your the Stable Diffusion model fine-tuned the Ghst-Cllgs concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **in the style of ghst-cllgs** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image 0](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img23.PNG) ![image 1](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img10.JPG) ![image 2](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img02.JPG) ![image 3](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img04.jpg) ![image 4](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img05.jpg) ![image 5](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img06.jpg) ![image 6](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img18.jpg) ![image 7](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img19.jpg) ![image 8](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img20.jpg) ![image 9](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img36.jpg) ![image 10](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img33.jpg) ![image 11](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img34.jpg) ![image 12](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img35.jpg) ![image 13](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img28.jpg) ![image 14](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img27.jpg) ![image 15](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img29.jpg) ![image 16](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img12.jpeg) ![image 17](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img07.jpeg) ![image 18](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img14.jpeg) ![image 19](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img13.jpeg) ![image 20](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img09.jpeg) ![image 21](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img08.jpeg) ![image 22](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img31.jpeg) ![image 23](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img17.jpeg) ![image 24](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img11.jpeg) ![image 25](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img30.jpeg) ![image 
26](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img16.jpeg) ![image 27](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img15.jpeg) ![image 28](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img32.jpeg) ![image 29](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img03.jpg) ![image 30](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img22.jpg) ![image 31](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img21.jpg) ![image 32](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img24.jpg) ![image 33](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img25.jpg) ![image 34](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img26.jpg) ![image 35](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img37.jpeg) ![image 36](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img38.jpeg) ![image 37](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img39.jpeg) ![image 38](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img40.jpg) ![image 39](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img41.jpg) ![image 40](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img42.jpg) ![image 41](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img43.png) ![image 42](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img01.png) ![image 43](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img46.jpg) ![image 44](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img45.jpg) ![image 45](https://huggingface.co/ina-hre/ghst-cllgs/resolve/main/concept_images/resized_img44.jpg)
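The card points readers to the generic inference notebook; a minimal `diffusers` sketch using the `instance_prompt` style token looks like the following. The prompt text, dtype, and sampling settings are illustrative choices rather than recommendations from the author.

```python
# Minimal sketch: generate an image with the fine-tuned concept via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ina-hre/ghst-cllgs", torch_dtype=torch.float16
).to("cuda")

prompt = "a moonlit forest in the style of ghst-cllgs"  # example prompt, not from the card
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("ghst-cllgs-sample.png")
```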
{"license": "creativeml-openrail-m", "tags": ["text-to-image"]}
ina-hre/ghst-cllgs
null
[ "diffusers", "safetensors", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-15T22:49:01+00:00
[]
[]
TAGS #diffusers #safetensors #text-to-image #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
### Ghst-Cllgs on Stable Diffusion via Dreambooth #### model by ina-hre This your the Stable Diffusion model fine-tuned the Ghst-Cllgs concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the 'instance_prompt': in the style of ghst-cllgs You can also train your own concepts and upload them to the library by using this notebook. And you can run your new concept via 'diffusers': Colab Notebook for Inference, Spaces with the Public Concepts loaded Here are the images used for training this concept: !image 0 !image 1 !image 2 !image 3 !image 4 !image 5 !image 6 !image 7 !image 8 !image 9 !image 10 !image 11 !image 12 !image 13 !image 14 !image 15 !image 16 !image 17 !image 18 !image 19 !image 20 !image 21 !image 22 !image 23 !image 24 !image 25 !image 26 !image 27 !image 28 !image 29 !image 30 !image 31 !image 32 !image 33 !image 34 !image 35 !image 36 !image 37 !image 38 !image 39 !image 40 !image 41 !image 42 !image 43 !image 44 !image 45
[ "### Ghst-Cllgs on Stable Diffusion via Dreambooth", "#### model by ina-hre\nThis your the Stable Diffusion model fine-tuned the Ghst-Cllgs concept taught to Stable Diffusion with Dreambooth.\nIt can be used by modifying the 'instance_prompt': in the style of ghst-cllgs\n\nYou can also train your own concepts and upload them to the library by using this notebook.\nAnd you can run your new concept via 'diffusers': Colab Notebook for Inference, Spaces with the Public Concepts loaded\n\nHere are the images used for training this concept:\n!image 0\n!image 1\n!image 2\n!image 3\n!image 4\n!image 5\n!image 6\n!image 7\n!image 8\n!image 9\n!image 10\n!image 11\n!image 12\n!image 13\n!image 14\n!image 15\n!image 16\n!image 17\n!image 18\n!image 19\n!image 20\n!image 21\n!image 22\n!image 23\n!image 24\n!image 25\n!image 26\n!image 27\n!image 28\n!image 29\n!image 30\n!image 31\n!image 32\n!image 33\n!image 34\n!image 35\n!image 36\n!image 37\n!image 38\n!image 39\n!image 40\n!image 41\n!image 42\n!image 43\n!image 44\n!image 45" ]
[ "TAGS\n#diffusers #safetensors #text-to-image #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "### Ghst-Cllgs on Stable Diffusion via Dreambooth", "#### model by ina-hre\nThis your the Stable Diffusion model fine-tuned the Ghst-Cllgs concept taught to Stable Diffusion with Dreambooth.\nIt can be used by modifying the 'instance_prompt': in the style of ghst-cllgs\n\nYou can also train your own concepts and upload them to the library by using this notebook.\nAnd you can run your new concept via 'diffusers': Colab Notebook for Inference, Spaces with the Public Concepts loaded\n\nHere are the images used for training this concept:\n!image 0\n!image 1\n!image 2\n!image 3\n!image 4\n!image 5\n!image 6\n!image 7\n!image 8\n!image 9\n!image 10\n!image 11\n!image 12\n!image 13\n!image 14\n!image 15\n!image 16\n!image 17\n!image 18\n!image 19\n!image 20\n!image 21\n!image 22\n!image 23\n!image 24\n!image 25\n!image 26\n!image 27\n!image 28\n!image 29\n!image 30\n!image 31\n!image 32\n!image 33\n!image 34\n!image 35\n!image 36\n!image 37\n!image 38\n!image 39\n!image 40\n!image 41\n!image 42\n!image 43\n!image 44\n!image 45" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cackerman/rewrites_llama13bchat_4bit_ft_full2
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T22:49:52+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_tata-seqsight_8192_512_17M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.4773 - F1 Score: 0.8059 - Accuracy: 0.8059 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:| | 0.4702 | 66.67 | 200 | 0.4911 | 0.7977 | 0.7977 | | 0.3398 | 133.33 | 400 | 0.5058 | 0.7978 | 0.7977 | | 0.2551 | 200.0 | 600 | 0.5752 | 0.7897 | 0.7896 | | 0.1994 | 266.67 | 800 | 0.6860 | 0.7779 | 0.7781 | | 0.1567 | 333.33 | 1000 | 0.7631 | 0.7828 | 0.7830 | | 0.1247 | 400.0 | 1200 | 0.8909 | 0.7909 | 0.7912 | | 0.1032 | 466.67 | 1400 | 0.9545 | 0.7928 | 0.7928 | | 0.0852 | 533.33 | 1600 | 0.9998 | 0.7831 | 0.7830 | | 0.0736 | 600.0 | 1800 | 1.0842 | 0.7863 | 0.7863 | | 0.0626 | 666.67 | 2000 | 1.2115 | 0.7895 | 0.7896 | | 0.0561 | 733.33 | 2200 | 1.2140 | 0.7798 | 0.7798 | | 0.05 | 800.0 | 2400 | 1.2165 | 0.7750 | 0.7749 | | 0.0452 | 866.67 | 2600 | 1.3222 | 0.7816 | 0.7814 | | 0.0426 | 933.33 | 2800 | 1.3354 | 0.7812 | 0.7814 | | 0.039 | 1000.0 | 3000 | 1.3340 | 0.7767 | 0.7765 | | 0.0354 | 1066.67 | 3200 | 1.3500 | 0.7783 | 0.7781 | | 0.0334 | 1133.33 | 3400 | 1.3776 | 0.7880 | 0.7879 | | 0.0307 | 1200.0 | 3600 | 1.4558 | 0.7815 | 0.7814 | | 0.0274 | 1266.67 | 3800 | 1.4505 | 0.7848 | 0.7847 | | 0.0268 | 1333.33 | 4000 | 1.4297 | 0.7799 | 0.7798 | | 0.0248 | 1400.0 | 4200 | 1.5156 | 0.7831 | 0.7830 | | 0.0248 | 1466.67 | 4400 | 1.5417 | 0.7734 | 0.7732 | | 0.0224 | 1533.33 | 4600 | 1.5354 | 0.7847 | 0.7847 | | 0.0219 | 1600.0 | 4800 | 1.5788 | 0.7815 | 0.7814 | | 0.0206 | 1666.67 | 5000 | 1.5842 | 0.7799 | 0.7798 | | 0.0198 | 1733.33 | 5200 | 1.5731 | 0.7767 | 0.7765 | | 0.0188 | 1800.0 | 5400 | 1.6532 | 0.7718 | 0.7716 | | 0.0182 | 1866.67 | 5600 | 1.6222 | 0.7685 | 0.7684 | | 0.0175 | 1933.33 | 5800 | 1.6634 | 0.7701 | 0.7700 | | 0.0169 | 2000.0 | 6000 | 1.7323 | 0.7718 | 0.7716 | | 0.0163 | 2066.67 | 6200 | 1.6867 | 0.7718 | 0.7716 | | 0.0163 | 2133.33 | 6400 | 1.6490 | 0.7701 | 0.7700 | | 0.0153 | 2200.0 | 6600 | 1.7200 | 0.7783 | 0.7781 | | 0.0147 | 2266.67 | 6800 | 1.7482 | 0.7620 | 0.7618 | | 0.0148 | 2333.33 | 7000 | 1.7389 | 0.7782 | 0.7781 | | 0.0139 | 2400.0 | 7200 | 1.7762 | 0.7734 | 0.7732 | | 0.0136 | 2466.67 | 7400 | 1.7786 | 0.7782 | 0.7781 | | 0.0135 | 2533.33 | 7600 | 1.7493 | 0.7749 | 0.7749 | | 0.0128 | 2600.0 | 7800 | 1.8206 | 0.7734 | 0.7732 | | 0.0127 | 2666.67 | 8000 | 1.7982 | 0.7718 | 0.7716 | | 0.0122 | 2733.33 | 8200 | 1.7764 | 0.7701 | 0.7700 | | 0.0126 | 2800.0 | 8400 | 1.7556 | 0.7620 | 0.7618 | | 0.0119 | 2866.67 | 
8600 | 1.8039 | 0.7717 | 0.7716 | | 0.0118 | 2933.33 | 8800 | 1.8303 | 0.7718 | 0.7716 | | 0.0114 | 3000.0 | 9000 | 1.8208 | 0.7701 | 0.7700 | | 0.0115 | 3066.67 | 9200 | 1.8285 | 0.7685 | 0.7684 | | 0.0114 | 3133.33 | 9400 | 1.8476 | 0.7685 | 0.7684 | | 0.0115 | 3200.0 | 9600 | 1.8574 | 0.7734 | 0.7732 | | 0.0107 | 3266.67 | 9800 | 1.8656 | 0.7734 | 0.7732 | | 0.0109 | 3333.33 | 10000 | 1.8753 | 0.7717 | 0.7716 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
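As with the other seqsight adapters, this card lists only training details. A hedged alternative to the two-step load shown earlier is PEFT's auto class, which resolves the base model recorded in the adapter config; whether the base checkpoint needs `trust_remote_code` or a different task head is an assumption to verify.

```python
# Sketch under assumptions: AutoPeftModelForSequenceClassification resolves the base
# model from the adapter config; extra kwargs are forwarded to the base model load.
from transformers import AutoTokenizer
from peft import AutoPeftModelForSequenceClassification

adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_8192_512_17M-L32_all"

model = AutoPeftModelForSequenceClassification.from_pretrained(
    adapter_id, num_labels=2, trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "mahdibaghbanzadeh/seqsight_8192_512_17M", trust_remote_code=True
)

inputs = tokenizer("TATAAAAGGCGCGT", return_tensors="pt")  # toy promoter-like sequence
print(model(**inputs).logits.argmax(dim=-1))
```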
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_8192_512_17M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_8192_512_17M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_17M", "region:us" ]
null
2024-04-15T22:51:27+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
GUE\_prom\_prom\_300\_tata-seqsight\_8192\_512\_17M-L32\_all ============================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset. It achieves the following results on the evaluation set: * Loss: 0.4773 * F1 Score: 0.8059 * Accuracy: 0.8059 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
null
## 💫 Community Model> WizardLM-2-7B by Microsoft *👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*. **Model creator:** [Microsoft](https://huggingface.co/microsoft)<br> **Original model**: [WizardLM-2-7B](https://huggingface.co/microsoft/WizardLM-2-7B)<br> **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2675](https://github.com/ggerganov/llama.cpp/releases/tag/b2675)<br> ## Model Summary: WizardLM 2 7B is a follow-up model to the original and highly successful WizardLM line of models. This model is trained to excel at multi-turn conversations, and does so very successfully, outclassing models more than twice its size.<br> This model should be used for general conversation and world knowledge, but as with most models these days, it will be relatively competent at coding and reasoning as well. ## Prompt Template: For now, you'll need to make your own template. Choose the `LM Studio Blank Preset` in your LM Studio. Then, set the system prompt to whatever you'd like (check the recommended one below), and set the following values:<br> `System Message Suffix`: ''<br> `User Message Prefix`: ' USER: '<br> `User Message Suffix`: ' ASSISTANT: ' Under the hood, the model will see a prompt that's formatted like so: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: </s> ``` ## Use case and examples WizardLM 2 was tuned for improved performance on complex chat, multilingual, reasoning and agent tasks. This makes it a great model to use when wanting to chat back and forth and have reasoning-based discussions. ### World knowledge: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/ka5C0km4sZz5fhiiWU58V.png) ## Conversational: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/Xc4eZ6vsO0tSJyWiNFNPC.png) ## Coding: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/6QM1IlZy6Wc9U5T6z7b9y.png) ## Technical Details WizardLM 2 applies several new methods for training compared to the original iteration, truly showing just how much the Open Source AI world has advanced since their initial offerings. The first of these is Progress Learning. Rather than applying all training data at once, the team applied stage-by-stage training by partitioning the data into multiple sections and training on each one after the other. AI Align AI (AAA) is another new process, whereby various state-of-the-art LLMs are allowed to co-teach and improve from each other, using simulated chats, quality judging, and improvement suggestions. They also participate in self-teaching in a similar manner. The model then underwent Supervised Learning, Stage-DPO, and Evol-Instruct and Instruction&Process Supervised Reinforcement Learning (RLEIF) which uses an instruction quality reward model and a supervision reward model for more precise correctness. The result is a model that performs exceptionally well on the automatic MT-Bench evaluation. 
For more information, check the WizardLM2 blog post [here](https://wizardlm.github.io/WizardLM2/) ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. 🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) for his dataset (linked [here](https://github.com/ggerganov/llama.cpp/discussions/5263)) that was used for calculating the imatrix for these quants, which improves the overall quality! ## Disclaimers LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
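For local use outside LM Studio, the same template can be reproduced in code. The sketch below assembles the Vicuna-style prompt described above and runs it with `llama-cpp-python`; the exact GGUF filename, context size, and sampling settings are assumptions.

```python
# Hedged sketch: build the USER:/ASSISTANT: prompt from the card and run a GGUF quant.
from llama_cpp import Llama

SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)


def build_prompt(user_message: str) -> str:
    # System message, then ' USER: ' prefix and ' ASSISTANT: ' suffix, as described above.
    return f"{SYSTEM} USER: {user_message} ASSISTANT: "


llm = Llama(model_path="WizardLM-2-7B-Q4_K_M.gguf", n_ctx=4096)  # filename is an assumption
out = llm(build_prompt("Summarize what WizardLM 2 is tuned for."), max_tokens=256, stop=["USER:"])
print(out["choices"][0]["text"])
```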
{"license": "apache-2.0", "quantized_by": "bartowski", "pipeline_tag": "text-generation", "lm_studio": {"param_count": "7b", "use_case": "general", "release_date": "15-04-2024", "model_creator": "microsoft", "prompt_template": "vicuna", "system_prompt": "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.", "base_model": "mistral", "original_repo": "microsoft/WizardLM-2-7B"}}
lmstudio-community/WizardLM-2-7B-GGUF
null
[ "gguf", "text-generation", "license:apache-2.0", "region:us" ]
null
2024-04-15T22:51:45+00:00
[]
[]
TAGS #gguf #text-generation #license-apache-2.0 #region-us
## Community Model> WizardLM-2-7B by Microsoft * LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord*. Model creator: Microsoft<br> Original model: WizardLM-2-7B<br> GGUF quantization: provided by bartowski based on 'URL' release b2675<br> ## Model Summary: WizardLM 2 7B is a followup model to the original and highly successful WizardLM line of models. This model is trained to excel at multi-turn conversations, and does so very successfully, outclassing models more than twice its size.<br> This model should be used for general conversation and world knowledge, but as with most models these days will be relatively competent at coding and reasoning as well. ## Prompt Template: For now, you'll need to make your own template. Choose the 'LM Studio Blank Preset' in your LM Studio. Then, set the system prompt to whatever you'd like (check the recommended one below), and set the following values:<br> 'System Message Suffix': ''<br> 'User Message Prefix': ' USER: '<br> 'User Message Suffix': ' ASSISTANT: ' Under the hood, the model will see a prompt that's formatted like so: ## Use case and examples WizardLM 2 was tuned for improved performance on complex chat, multilingual, reasoning and agent tasks. This makes it a great model to use when wanting to chat back and forth and have reasoning based discussions. ### World knowledge: !image/png ## Conversational: !image/png ## Coding: !image/png ## Technical Details WizardLM 2 applies several new methods for training compared to the original iteration, truly showing just how much the Open Source AI world has advanced since their intial offerings. The first of which is Progress Learning. Rather than applying all training data at once, the team applied stage-by-stage training by partitioning the data into multiple sections and training on each one after the other. AI Align AI (AAA) is another new process, whereby various state-of-the-art LLMs are allowed to co-teach and improve from each other, using simulated chats, quality judging, and improvement suggestions. They also participate in self-teaching in a similar manor. The model then underwent Supervised Learning, Stage-DPO, and Evol-Instruct and Instruction&Process Supervised Reinforcement Learning (RLEIF) which uses an instruction quality reward model and a supervision reward model for more precise correctness. The results are a model that performs exceptionally well on the automatic MT-Bench evaluation. For more information, check the WizardLM2 blog post here ## Special thanks Special thanks to Georgi Gerganov and the whole team working on URL for making all of this possible. Special thanks to Kalomaze for his dataset (linked here) that was used for calculating the imatrix for these quants, which improves the overall quality! ## Disclaimers LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. 
LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
[ "## Community Model> WizardLM-2-7B by Microsoft\n\n* LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord*.\n\nModel creator: Microsoft<br>\nOriginal model: WizardLM-2-7B<br>\nGGUF quantization: provided by bartowski based on 'URL' release b2675<br>", "## Model Summary:\n\nWizardLM 2 7B is a followup model to the original and highly successful WizardLM line of models. This model is trained to excel at multi-turn conversations, and does so very successfully, outclassing models more than twice its size.<br>\nThis model should be used for general conversation and world knowledge, but as with most models these days will be relatively competent at coding and reasoning as well.", "## Prompt Template:\n\nFor now, you'll need to make your own template. Choose the 'LM Studio Blank Preset' in your LM Studio.\n\nThen, set the system prompt to whatever you'd like (check the recommended one below), and set the following values:<br>\n'System Message Suffix': ''<br>\n'User Message Prefix': ' USER: '<br>\n'User Message Suffix': ' ASSISTANT: '\n\nUnder the hood, the model will see a prompt that's formatted like so:", "## Use case and examples\n\nWizardLM 2 was tuned for improved performance on complex chat, multilingual, reasoning and agent tasks. This makes it a great model to use when wanting to chat back and forth and have reasoning based discussions.", "### World knowledge:\n\n!image/png", "## Conversational:\n\n!image/png", "## Coding:\n\n!image/png", "## Technical Details\n\nWizardLM 2 applies several new methods for training compared to the original iteration, truly showing just how much the Open Source AI world has advanced since their intial offerings.\n\nThe first of which is Progress Learning. Rather than applying all training data at once, the team applied stage-by-stage training by partitioning the data into multiple sections and training on each one after the other.\n\nAI Align AI (AAA) is another new process, whereby various state-of-the-art LLMs are allowed to co-teach and improve from each other, using simulated chats, quality judging, and improvement suggestions. They also participate in self-teaching in a similar manor.\n\nThe model then underwent Supervised Learning, Stage-DPO, and Evol-Instruct and Instruction&Process Supervised Reinforcement Learning (RLEIF) which uses an instruction quality reward model and a supervision reward model for more precise correctness.\n\nThe results are a model that performs exceptionally well on the automatic MT-Bench evaluation. \n\nFor more information, check the WizardLM2 blog post here", "## Special thanks\n\n Special thanks to Georgi Gerganov and the whole team working on URL for making all of this possible.\n\n Special thanks to Kalomaze for his dataset (linked here) that was used for calculating the imatrix for these quants, which improves the overall quality!", "## Disclaimers\n\nLM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. 
LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio." ]
[ "TAGS\n#gguf #text-generation #license-apache-2.0 #region-us \n", "## Community Model> WizardLM-2-7B by Microsoft\n\n* LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord*.\n\nModel creator: Microsoft<br>\nOriginal model: WizardLM-2-7B<br>\nGGUF quantization: provided by bartowski based on 'URL' release b2675<br>", "## Model Summary:\n\nWizardLM 2 7B is a followup model to the original and highly successful WizardLM line of models. This model is trained to excel at multi-turn conversations, and does so very successfully, outclassing models more than twice its size.<br>\nThis model should be used for general conversation and world knowledge, but as with most models these days will be relatively competent at coding and reasoning as well.", "## Prompt Template:\n\nFor now, you'll need to make your own template. Choose the 'LM Studio Blank Preset' in your LM Studio.\n\nThen, set the system prompt to whatever you'd like (check the recommended one below), and set the following values:<br>\n'System Message Suffix': ''<br>\n'User Message Prefix': ' USER: '<br>\n'User Message Suffix': ' ASSISTANT: '\n\nUnder the hood, the model will see a prompt that's formatted like so:", "## Use case and examples\n\nWizardLM 2 was tuned for improved performance on complex chat, multilingual, reasoning and agent tasks. This makes it a great model to use when wanting to chat back and forth and have reasoning based discussions.", "### World knowledge:\n\n!image/png", "## Conversational:\n\n!image/png", "## Coding:\n\n!image/png", "## Technical Details\n\nWizardLM 2 applies several new methods for training compared to the original iteration, truly showing just how much the Open Source AI world has advanced since their intial offerings.\n\nThe first of which is Progress Learning. Rather than applying all training data at once, the team applied stage-by-stage training by partitioning the data into multiple sections and training on each one after the other.\n\nAI Align AI (AAA) is another new process, whereby various state-of-the-art LLMs are allowed to co-teach and improve from each other, using simulated chats, quality judging, and improvement suggestions. They also participate in self-teaching in a similar manor.\n\nThe model then underwent Supervised Learning, Stage-DPO, and Evol-Instruct and Instruction&Process Supervised Reinforcement Learning (RLEIF) which uses an instruction quality reward model and a supervision reward model for more precise correctness.\n\nThe results are a model that performs exceptionally well on the automatic MT-Bench evaluation. \n\nFor more information, check the WizardLM2 blog post here", "## Special thanks\n\n Special thanks to Georgi Gerganov and the whole team working on URL for making all of this possible.\n\n Special thanks to Kalomaze for his dataset (linked here) that was used for calculating the imatrix for these quants, which improves the overall quality!", "## Disclaimers\n\nLM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. 
Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio." ]
translation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Gopal-finetuned-custom-en-de This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "cc-by-4.0", "tags": ["translation", "generated_from_trainer"], "base_model": "Helsinki-NLP/opus-mt-en-de", "model-index": [{"name": "Gopal-finetuned-custom-en-de", "results": []}]}
Gopal1853/Gopal-finetuned-custom-en-de
null
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-en-de", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T22:52:28+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #marian #text2text-generation #translation #generated_from_trainer #base_model-Helsinki-NLP/opus-mt-en-de #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
# Gopal-finetuned-custom-en-de This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-de on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# Gopal-finetuned-custom-en-de\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-de on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 200\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #marian #text2text-generation #translation #generated_from_trainer #base_model-Helsinki-NLP/opus-mt-en-de #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Gopal-finetuned-custom-en-de\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-de on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 200\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_notata-seqsight_8192_512_17M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset. It achieves the following results on the evaluation set: - Loss: 0.1283 - F1 Score: 0.9576 - Accuracy: 0.9576 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.2155 | 9.52 | 200 | 0.1170 | 0.9559 | 0.9559 | | 0.1204 | 19.05 | 400 | 0.1064 | 0.9612 | 0.9612 | | 0.1103 | 28.57 | 600 | 0.1041 | 0.9629 | 0.9629 | | 0.1025 | 38.1 | 800 | 0.0988 | 0.9604 | 0.9604 | | 0.097 | 47.62 | 1000 | 0.0998 | 0.9627 | 0.9627 | | 0.0923 | 57.14 | 1200 | 0.0996 | 0.9638 | 0.9638 | | 0.0874 | 66.67 | 1400 | 0.0964 | 0.9649 | 0.9650 | | 0.0829 | 76.19 | 1600 | 0.0972 | 0.9638 | 0.9638 | | 0.0788 | 85.71 | 1800 | 0.0993 | 0.9648 | 0.9648 | | 0.0756 | 95.24 | 2000 | 0.1010 | 0.9633 | 0.9633 | | 0.0721 | 104.76 | 2200 | 0.1012 | 0.9655 | 0.9655 | | 0.068 | 114.29 | 2400 | 0.1072 | 0.9650 | 0.9650 | | 0.0656 | 123.81 | 2600 | 0.1038 | 0.9655 | 0.9655 | | 0.061 | 133.33 | 2800 | 0.1079 | 0.9649 | 0.9650 | | 0.0588 | 142.86 | 3000 | 0.1207 | 0.9616 | 0.9616 | | 0.0566 | 152.38 | 3200 | 0.1132 | 0.9644 | 0.9644 | | 0.0536 | 161.9 | 3400 | 0.1159 | 0.9642 | 0.9642 | | 0.0504 | 171.43 | 3600 | 0.1192 | 0.9644 | 0.9644 | | 0.0493 | 180.95 | 3800 | 0.1229 | 0.9636 | 0.9636 | | 0.0475 | 190.48 | 4000 | 0.1179 | 0.9644 | 0.9644 | | 0.045 | 200.0 | 4200 | 0.1272 | 0.9640 | 0.9640 | | 0.0426 | 209.52 | 4400 | 0.1243 | 0.9629 | 0.9629 | | 0.041 | 219.05 | 4600 | 0.1305 | 0.9621 | 0.9621 | | 0.0397 | 228.57 | 4800 | 0.1294 | 0.9623 | 0.9623 | | 0.0377 | 238.1 | 5000 | 0.1368 | 0.9614 | 0.9614 | | 0.0359 | 247.62 | 5200 | 0.1357 | 0.9629 | 0.9629 | | 0.0351 | 257.14 | 5400 | 0.1399 | 0.9629 | 0.9629 | | 0.0332 | 266.67 | 5600 | 0.1379 | 0.9648 | 0.9648 | | 0.032 | 276.19 | 5800 | 0.1437 | 0.9646 | 0.9646 | | 0.0318 | 285.71 | 6000 | 0.1475 | 0.9614 | 0.9614 | | 0.0305 | 295.24 | 6200 | 0.1438 | 0.9610 | 0.9610 | | 0.0299 | 304.76 | 6400 | 0.1439 | 0.9634 | 0.9634 | | 0.0288 | 314.29 | 6600 | 0.1479 | 0.9617 | 0.9617 | | 0.0282 | 323.81 | 6800 | 0.1495 | 0.9636 | 0.9636 | | 0.0277 | 333.33 | 7000 | 0.1434 | 0.9634 | 0.9634 | | 0.0266 | 342.86 | 7200 | 0.1508 | 0.9623 | 0.9623 | | 0.0262 | 352.38 | 7400 | 0.1502 | 0.9644 | 0.9644 | | 0.0255 | 361.9 | 7600 | 0.1541 | 0.9634 | 0.9634 | | 0.025 | 371.43 | 7800 | 0.1513 | 0.9634 | 0.9634 | | 0.0243 | 380.95 | 8000 | 0.1548 | 0.9642 | 0.9642 | | 0.0241 | 390.48 | 8200 | 0.1564 | 0.9621 | 0.9621 | | 0.0235 | 400.0 | 8400 | 0.1561 | 0.9634 | 0.9634 | | 0.023 | 409.52 | 8600 | 0.1589 | 0.9633 | 0.9633 
| | 0.0229 | 419.05 | 8800 | 0.1582 | 0.9631 | 0.9631 | | 0.0228 | 428.57 | 9000 | 0.1594 | 0.9631 | 0.9631 | | 0.0225 | 438.1 | 9200 | 0.1566 | 0.9634 | 0.9634 | | 0.0223 | 447.62 | 9400 | 0.1564 | 0.9646 | 0.9646 | | 0.0224 | 457.14 | 9600 | 0.1576 | 0.9638 | 0.9638 | | 0.0222 | 466.67 | 9800 | 0.1595 | 0.9631 | 0.9631 | | 0.0223 | 476.19 | 10000 | 0.1593 | 0.9636 | 0.9636 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
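Because the card gives only training details, here is a hedged loading sketch. It assumes the adapter repo id recorded below, that the adapter was saved with a sequence-classification head for this binary promoter task, and that the tokenizer should come from the base checkpoint (the adapter repo may not ship one); the seqsight base may additionally require `trust_remote_code`.

```python
# Hedged sketch, not the authors' documented usage: load the LoRA adapter
# on top of its base model and classify a toy DNA sequence.
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_8192_512_17M-L32_all"
base_id = "mahdibaghbanzadeh/seqsight_8192_512_17M"

model = AutoPeftModelForSequenceClassification.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy sequence, not a real promoter
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # class probabilities (promoter vs. non-promoter, assuming two labels)
```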
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_8192_512_17M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_8192_512_17M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_17M", "region:us" ]
null
2024-04-15T22:57:46+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
GUE\_prom\_prom\_300\_notata-seqsight\_8192\_512\_17M-L32\_all ============================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset. It achieves the following results on the evaluation set: * Loss: 0.1283 * F1 Score: 0.9576 * Accuracy: 0.9576 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tapt_helpfulness_base_pretraining_model_final This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4543 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 21 - eval_batch_size: 21 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.7697 | 1.0 | 232 | 1.5904 | | 1.6633 | 2.0 | 465 | 1.5650 | | 1.6314 | 3.0 | 697 | 1.5461 | | 1.594 | 4.0 | 930 | 1.5243 | | 1.5766 | 5.0 | 1162 | 1.5312 | | 1.5451 | 6.0 | 1395 | 1.5194 | | 1.5271 | 7.0 | 1627 | 1.5034 | | 1.5038 | 8.0 | 1860 | 1.5080 | | 1.4906 | 9.0 | 2092 | 1.4942 | | 1.4801 | 10.0 | 2325 | 1.4783 | | 1.4638 | 11.0 | 2557 | 1.4900 | | 1.4407 | 12.0 | 2790 | 1.4820 | | 1.4285 | 13.0 | 3022 | 1.4692 | | 1.4177 | 14.0 | 3255 | 1.4698 | | 1.4051 | 15.0 | 3487 | 1.4790 | | 1.3899 | 16.0 | 3720 | 1.4800 | | 1.3832 | 17.0 | 3952 | 1.4730 | | 1.3706 | 18.0 | 4185 | 1.4656 | | 1.3617 | 19.0 | 4417 | 1.4625 | | 1.3464 | 20.0 | 4650 | 1.4699 | | 1.3449 | 21.0 | 4882 | 1.4641 | | 1.3258 | 22.0 | 5115 | 1.4554 | | 1.3248 | 23.0 | 5347 | 1.4595 | | 1.3119 | 24.0 | 5580 | 1.4643 | | 1.3087 | 25.0 | 5812 | 1.4589 | | 1.2942 | 26.0 | 6045 | 1.4633 | | 1.2875 | 27.0 | 6277 | 1.4517 | | 1.2731 | 28.0 | 6510 | 1.4506 | | 1.2727 | 29.0 | 6742 | 1.4501 | | 1.261 | 30.0 | 6975 | 1.4492 | | 1.2559 | 31.0 | 7207 | 1.4553 | | 1.2437 | 32.0 | 7440 | 1.4429 | | 1.2404 | 33.0 | 7672 | 1.4456 | | 1.2301 | 34.0 | 7905 | 1.4497 | | 1.2277 | 35.0 | 8137 | 1.4400 | | 1.2154 | 36.0 | 8370 | 1.4491 | | 1.2118 | 37.0 | 8602 | 1.4521 | | 1.2022 | 38.0 | 8835 | 1.4362 | | 1.2027 | 39.0 | 9067 | 1.4431 | | 1.1883 | 40.0 | 9300 | 1.4526 | | 1.1861 | 41.0 | 9532 | 1.4596 | | 1.1747 | 42.0 | 9765 | 1.4390 | | 1.1708 | 43.0 | 9997 | 1.4501 | | 1.1636 | 44.0 | 10230 | 1.4549 | | 1.1623 | 45.0 | 10462 | 1.4616 | | 1.1569 | 46.0 | 10695 | 1.4379 | | 1.149 | 47.0 | 10927 | 1.4492 | | 1.1401 | 48.0 | 11160 | 1.4502 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "roberta-base", "model-index": [{"name": "tapt_helpfulness_base_pretraining_model_final", "results": []}]}
BigTMiami/tapt_helpfulness_base_pretraining_model_final
null
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T23:02:03+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
tapt\_helpfulness\_base\_pretraining\_model\_final ================================================== This model is a fine-tuned version of roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.4543 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 21 * eval\_batch\_size: 21 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 42 * optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 * lr\_scheduler\_type: linear * num\_epochs: 100 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 21\n* eval\\_batch\\_size: 21\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 21\n* eval\\_batch\\_size: 21\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral7b-pms-api_name This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.9894 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.5758 | 1.0 | 1 | 4.7695 | | 4.5667 | 2.0 | 2 | 4.5682 | | 4.3892 | 3.0 | 3 | 4.1758 | | 4.0032 | 4.0 | 4 | 3.9894 | ### Framework versions - PEFT 0.10.0 - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "mistral7b-pms-api_name", "results": []}]}
sharsun/mistral7b-pms-api_name
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-04-15T23:02:15+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us
mistral7b-pms-api\_name ======================= This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 3.9894 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2 * num\_epochs: 4 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.38.2 * Pytorch 2.1.0+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
paulo037/checkpoint-45
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T23:05:54+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ruBert-base-sberquad-0.001-len_3-filtered-v2 This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 7000 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.001-len_3-filtered-v2", "results": []}]}
Shalazary/ruBert-base-sberquad-0.001-len_3-filtered-v2
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:ai-forever/ruBert-base", "license:apache-2.0", "region:us" ]
null
2024-04-15T23:07:37+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
# ruBert-base-sberquad-0.001-len_3-filtered-v2 This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 7000 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# ruBert-base-sberquad-0.001-len_3-filtered-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n", "# ruBert-base-sberquad-0.001-len_3-filtered-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
null
References: https://github.com/Liuyuxinict/prenet
{"license": "mit"}
dand9999/nutrition_pred
null
[ "safetensors", "license:mit", "has_space", "region:us" ]
null
2024-04-15T23:09:00+00:00
[]
[]
TAGS #safetensors #license-mit #has_space #region-us
References: URL
[]
[ "TAGS\n#safetensors #license-mit #has_space #region-us \n" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_all-seqsight_8192_512_17M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.4137 - F1 Score: 0.8069 - Accuracy: 0.8069 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.4931 | 8.33 | 200 | 0.4497 | 0.7922 | 0.7922 | | 0.4303 | 16.67 | 400 | 0.4406 | 0.7989 | 0.7990 | | 0.4185 | 25.0 | 600 | 0.4322 | 0.8018 | 0.8019 | | 0.4088 | 33.33 | 800 | 0.4259 | 0.8008 | 0.8008 | | 0.4035 | 41.67 | 1000 | 0.4256 | 0.8062 | 0.8063 | | 0.3982 | 50.0 | 1200 | 0.4218 | 0.8043 | 0.8044 | | 0.3926 | 58.33 | 1400 | 0.4253 | 0.8046 | 0.8046 | | 0.3872 | 66.67 | 1600 | 0.4327 | 0.8025 | 0.8029 | | 0.3836 | 75.0 | 1800 | 0.4216 | 0.8061 | 0.8061 | | 0.379 | 83.33 | 2000 | 0.4299 | 0.8047 | 0.8047 | | 0.3763 | 91.67 | 2200 | 0.4320 | 0.8036 | 0.8039 | | 0.372 | 100.0 | 2400 | 0.4321 | 0.8036 | 0.8037 | | 0.3666 | 108.33 | 2600 | 0.4347 | 0.8053 | 0.8054 | | 0.3645 | 116.67 | 2800 | 0.4332 | 0.8054 | 0.8054 | | 0.3599 | 125.0 | 3000 | 0.4343 | 0.8049 | 0.8049 | | 0.357 | 133.33 | 3200 | 0.4347 | 0.8036 | 0.8037 | | 0.3538 | 141.67 | 3400 | 0.4398 | 0.8034 | 0.8035 | | 0.351 | 150.0 | 3600 | 0.4430 | 0.8023 | 0.8025 | | 0.3469 | 158.33 | 3800 | 0.4468 | 0.8006 | 0.8007 | | 0.3436 | 166.67 | 4000 | 0.4444 | 0.8041 | 0.8042 | | 0.3403 | 175.0 | 4200 | 0.4478 | 0.7991 | 0.7995 | | 0.3376 | 183.33 | 4400 | 0.4485 | 0.8043 | 0.8044 | | 0.3341 | 191.67 | 4600 | 0.4478 | 0.8047 | 0.8047 | | 0.3329 | 200.0 | 4800 | 0.4460 | 0.8047 | 0.8049 | | 0.3283 | 208.33 | 5000 | 0.4504 | 0.8035 | 0.8035 | | 0.3261 | 216.67 | 5200 | 0.4528 | 0.8017 | 0.8019 | | 0.3247 | 225.0 | 5400 | 0.4541 | 0.8027 | 0.8029 | | 0.3212 | 233.33 | 5600 | 0.4617 | 0.8050 | 0.8051 | | 0.3196 | 241.67 | 5800 | 0.4509 | 0.8055 | 0.8056 | | 0.3157 | 250.0 | 6000 | 0.4659 | 0.8051 | 0.8052 | | 0.315 | 258.33 | 6200 | 0.4533 | 0.8042 | 0.8042 | | 0.3129 | 266.67 | 6400 | 0.4571 | 0.8044 | 0.8044 | | 0.311 | 275.0 | 6600 | 0.4570 | 0.8050 | 0.8051 | | 0.3085 | 283.33 | 6800 | 0.4573 | 0.8029 | 0.8030 | | 0.3062 | 291.67 | 7000 | 0.4643 | 0.8045 | 0.8046 | | 0.3053 | 300.0 | 7200 | 0.4657 | 0.8045 | 0.8046 | | 0.3037 | 308.33 | 7400 | 0.4652 | 0.8048 | 0.8049 | | 0.3023 | 316.67 | 7600 | 0.4707 | 0.8019 | 0.8020 | | 0.3015 | 325.0 | 7800 | 0.4741 | 0.8029 | 0.8030 | | 0.3004 | 333.33 | 8000 | 0.4718 | 0.8024 | 0.8025 | | 0.2972 | 341.67 | 8200 | 0.4754 | 0.8020 | 0.8022 | | 0.2983 | 350.0 | 8400 | 0.4726 | 0.8010 | 0.8010 | | 0.2966 | 358.33 | 8600 | 0.4766 | 0.8030 | 0.8030 | | 0.2963 | 
366.67 | 8800 | 0.4759 | 0.8019 | 0.8020 | | 0.2963 | 375.0 | 9000 | 0.4745 | 0.8018 | 0.8019 | | 0.2945 | 383.33 | 9200 | 0.4767 | 0.8014 | 0.8015 | | 0.2935 | 391.67 | 9400 | 0.4745 | 0.8023 | 0.8024 | | 0.2939 | 400.0 | 9600 | 0.4761 | 0.8026 | 0.8027 | | 0.2935 | 408.33 | 9800 | 0.4765 | 0.8018 | 0.8019 | | 0.2931 | 416.67 | 10000 | 0.4768 | 0.8009 | 0.8010 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
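As a complement to plain adapter loading, a hedged sketch of folding this LoRA adapter into the base weights for adapter-free deployment (repo id from this record; it assumes a LoRA adapter saved with a sequence-classification head):

```python
# Merge the LoRA deltas into the base model and save a standalone checkpoint.
from peft import AutoPeftModelForSequenceClassification

model = AutoPeftModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_8192_512_17M-L32_all"
)
merged = model.merge_and_unload()                    # folds LoRA weights into the base layers
merged.save_pretrained("seqsight_core_all_merged")   # plain transformers checkpoint, no PEFT needed
```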
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_8192_512_17M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_8192_512_17M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_17M", "region:us" ]
null
2024-04-15T23:12:50+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
GUE\_prom\_prom\_core\_all-seqsight\_8192\_512\_17M-L32\_all ============================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset. It achieves the following results on the evaluation set: * Loss: 0.4137 * F1 Score: 0.8069 * Accuracy: 0.8069 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# MaziyarPanahi/Calme-7B-Instruct-v0.9 AWQ - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [Calme-7B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.9) <img src="https://cdn-uploads.huggingface.co/production/uploads/5fd5e18a90b6dc4633f6d292/LzEf6vvq2qIiys-q7l9Hq.webp" width="550" /> ## Model Summary Calme-7B is a state-of-the-art language model with 7 billion parameters, fine-tuned over high-quality datasets on top of Mistral-7B. The Calme-7B models excel in generating text that resonates with clarity, calmness, and coherence. ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/Calme-7B-Instruct-v0.9-AWQ" system_message = "You are Calme, incarnated a powerful AI with everything." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code ## Prompt template: ChatML ```plaintext <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ```
{"license": "apache-2.0", "tags": ["generated_from_trainer", "mistral", "7b", "calme", "finetuned", "quantized", "4-bit", "AWQ", "transformers", "pytorch", "mistral", "instruct", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "finetune", "chatml"], "inference": false, "model_creator": "MaziyarPanahi", "pipeline_tag": "text-generation", "quantized_by": "Suparious", "model-index": [{"name": "Calme-7B-Instruct-v0.9", "results": []}]}
solidrust/Calme-7B-Instruct-v0.9-AWQ
null
[ "transformers", "safetensors", "mistral", "text-generation", "generated_from_trainer", "7b", "calme", "finetuned", "quantized", "4-bit", "AWQ", "pytorch", "instruct", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "finetune", "chatml", "license:apache-2.0" ]
null
2024-04-15T23:14:22+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #generated_from_trainer #7b #calme #finetuned #quantized #4-bit #AWQ #pytorch #instruct #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #finetune #chatml #license-apache-2.0
# MaziyarPanahi/Calme-7B-Instruct-v0.9 AWQ - Model creator: MaziyarPanahi - Original model: Calme-7B-Instruct-v0.9 <img src="URL width="550" /> ## Model Summary Calme-7B is a state-of-the-art language model with 7 billion parameters, fine-tuned over high-quality datasets on top of Mistral-7B. The Calme-7B models excel in generating text that resonates with clarity, calmness, and coherence. ## How to use ### Install the necessary packages ### Example Python code ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - Text Generation Webui - using Loader: AutoAWQ - vLLM - version 0.2.2 or later for support for all model types. - Hugging Face Text Generation Inference (TGI) - Transformers version 4.35.0 and later, from any code or client that supports Transformers - AutoAWQ - for use from Python code ## Prompt template: ChatML
[ "# MaziyarPanahi/Calme-7B-Instruct-v0.9 AWQ\n\n- Model creator: MaziyarPanahi\n- Original model: Calme-7B-Instruct-v0.9\n\n<img src=\"URL width=\"550\" />", "## Model Summary\n\nCalme-7B is a state-of-the-art language model with 7 billion parameters, fine-tuned over high-quality datasets on top of Mistral-7B. The Calme-7B models excel in generating text that resonates with clarity, calmness, and coherence.", "## How to use", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code", "## Prompt template: ChatML" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #generated_from_trainer #7b #calme #finetuned #quantized #4-bit #AWQ #pytorch #instruct #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #finetune #chatml #license-apache-2.0 \n", "# MaziyarPanahi/Calme-7B-Instruct-v0.9 AWQ\n\n- Model creator: MaziyarPanahi\n- Original model: Calme-7B-Instruct-v0.9\n\n<img src=\"URL width=\"550\" />", "## Model Summary\n\nCalme-7B is a state-of-the-art language model with 7 billion parameters, fine-tuned over high-quality datasets on top of Mistral-7B. The Calme-7B models excel in generating text that resonates with clarity, calmness, and coherence.", "## How to use", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code", "## Prompt template: ChatML" ]
text-generation
transformers
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63cf23cffbd0cc580bc65c73/Kludqn78R4zztPL48g6QM.png) My first successful Dare-Ties merge. Because of the tokenizer difference of the model types (also bf16 vs f16), I had to use SLERP as well. Seems to perform well! Did a local lm-eval and HellaSwag gave me around 84.5, which seems decent. I will be submitting this for eval on the Open LLM Leaderboard as well. Preset for this should be ChatML, but standard default presets should work ok too. --- base_model: - senseable/WestLake-7B-v2 - cognitivecomputations/dolphin-2.8-mistral-7b-v02 library_name: transformers tags: - mergekit - merge --- # Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) as a base. ### Models Merged The following models were included in the merge: * [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: dare_ties parameters: int8_mask: true t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors embed_slerp: true models: - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 # No parameters necessary for base model - model: senseable/WestLake-7B-v2 parameters: density: 0.58 weight: 0.8 base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 tokenizer_source: model:senseable/WestLake-7B-v2 dtype: bfloat16 ```
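The card recommends a ChatML preset but shows no code, so here is a hedged sketch. It assumes the published tokenizer ships a ChatML chat template (inferred from the card's note, not verified):

```python
# Illustrative only: chat with the merged model via the tokenizer's chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Noodlz/DolphinLake-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Give me one tip for prompt engineering."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=80)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```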
{"license": "apache-2.0"}
Noodlz/DolphinLake-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:2311.03099", "arxiv:2306.01708", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T23:15:11+00:00
[ "2311.03099", "2306.01708" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #arxiv-2311.03099 #arxiv-2306.01708 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
!image/png

This is my first successful DARE-TIES merge. Because of the tokenizer difference between the model types (and bf16 vs. f16), I had to use SLERP as well.

It seems to perform well! A local lm-eval run gives a HellaSwag score of around 84.5, which seems decent. I will be submitting this for evaluation on the Open LLM Leaderboard as well.

The preset for this should be ChatML, but standard default presets should work OK too.

---
base_model:
- senseable/WestLake-7B-v2
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
library_name: transformers
tags:
- mergekit
- merge
---

# Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method using cognitivecomputations/dolphin-2.8-mistral-7b-v02 as a base.

### Models Merged

The following models were included in the merge:
* senseable/WestLake-7B-v2

### Configuration

The following YAML configuration was used to produce this model:
[ "# Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the DARE TIES merge method using cognitivecomputations/dolphin-2.8-mistral-7b-v02 as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* senseable/WestLake-7B-v2", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-2311.03099 #arxiv-2306.01708 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the DARE TIES merge method using cognitivecomputations/dolphin-2.8-mistral-7b-v02 as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* senseable/WestLake-7B-v2", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
null
CSGO Coach Mia, fine-tuned on mistralai/Mistral-7B-Instruct-v0.2

Sample usage:

from huggingface_hub import hf_hub_download
from llama_cpp import Llama
import torch

# Specify the path to your .gguf file
model_path = '/content/finetuned8b/finetuned8b.Q5_K_M.gguf'

# Instantiate the Llama model
llm = Llama(model_path=model_path)

prompt = "Coach Mia, help me with aiming "

## Generation kwargs
generation_kwargs = {
    "max_tokens":200,
    "stop":'[INST]',
    "echo":False, # Echo the prompt in the output
    "top_k":1 # This is essentially greedy decoding, since the model will always return the highest-probability token. Set this value > 1 for sampling decoding
}

res = llm(prompt, **generation_kwargs)

## Unpack the generated text from the LLM response dictionary and print it
print(res["choices"][0]["text"]) # res is short for result

# Sample output
100% accuracy. [/INST] Aiming is a crucial aspect of CS:GO. Let's start by analyzing your sensitivity settings and crosshair placement. We can also run some aim training drills to improve your precision.
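The sample above imports `hf_hub_download` but never calls it and instead hard-codes a Colab path. A hedged variant that actually pulls the GGUF from this repo before handing it to `llama-cpp-python` could look like the following; the exact `.gguf` filename inside the repo is an assumption taken from the path used in the sample.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized GGUF from this repo instead of relying on a local Colab path.
# The filename below is assumed from the path in the sample above; adjust if the repo
# stores it under a different name.
model_path = hf_hub_download(
    repo_id="Pavan178/finetuned8b-GGUF",
    filename="finetuned8b.Q5_K_M.gguf",
)

llm = Llama(model_path=model_path)

prompt = "Coach Mia, help me with aiming "
res = llm(prompt, max_tokens=200, stop=["[INST]"], echo=False, top_k=1)
print(res["choices"][0]["text"])
```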
{}
Pavan178/finetuned8b-GGUF
null
[ "gguf", "region:us" ]
null
2024-04-15T23:15:36+00:00
[]
[]
TAGS #gguf #region-us
CSGO Coach Mia, Finetuned on mistralai/Mistral-7B-Instruct-v0.2 Sample usage : from huggingface_hub import hf_hub_download from llama_cpp import Llama import torch # Specify the path to your .gguf file model_path = '/content/finetuned8b/finetuned8b.Q5_K_M.gguf' # Instantiate the Llama model llm = Llama(model_path=model_path) prompt = "Coach Mia, help me with aiming " ## Generation kwargs generation_kwargs = { "max_tokens":200, "stop":'[INST]', "echo":False, # Echo the prompt in the output "top_k":1 # This is essentially greedy decoding, since the model will always return the highest-probability token. Set this value > 1 for sampling decoding } res = llm(prompt, generation_kwargs) ## Unpack and the generated text from the LLM response dictionary and print it print(res["choices"][0]["text"]) # res is short for result #output 100% accuracy. [/INST] Aiming is a crucial aspect of CS:GO. Let's start by analyzing your sensitivity settings and crosshair placement. We can also run some aim training drills to improve your precision.
[ "# Specify the path to your .gguf file\nmodel_path = '/content/finetuned8b/finetuned8b.Q5_K_M.gguf'", "# Instantiate the Llama model\nllm = Llama(model_path=model_path)\n\nprompt = \"Coach Mia, help me with aiming \"", "## Generation kwargs\ngeneration_kwargs = {\n \"max_tokens\":200,\n \"stop\":'[INST]',\n \"echo\":False, # Echo the prompt in the output\n \"top_k\":1 # This is essentially greedy decoding, since the model will always return the highest-probability token. Set this value > 1 for sampling decoding\n}\n\nres = llm(prompt, generation_kwargs)", "## Unpack and the generated text from the LLM response dictionary and print it\nprint(res[\"choices\"][0][\"text\"])", "# res is short for result" ]
[ "TAGS\n#gguf #region-us \n", "# Specify the path to your .gguf file\nmodel_path = '/content/finetuned8b/finetuned8b.Q5_K_M.gguf'", "# Instantiate the Llama model\nllm = Llama(model_path=model_path)\n\nprompt = \"Coach Mia, help me with aiming \"", "## Generation kwargs\ngeneration_kwargs = {\n \"max_tokens\":200,\n \"stop\":'[INST]',\n \"echo\":False, # Echo the prompt in the output\n \"top_k\":1 # This is essentially greedy decoding, since the model will always return the highest-probability token. Set this value > 1 for sampling decoding\n}\n\nres = llm(prompt, generation_kwargs)", "## Unpack and the generated text from the LLM response dictionary and print it\nprint(res[\"choices\"][0][\"text\"])", "# res is short for result" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0039 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 17 | 0.1221 | 1.0 | | No log | 2.0 | 34 | 0.0076 | 1.0 | | No log | 3.0 | 51 | 0.0039 | 1.0 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
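The card gives training details but no inference snippet. Since it does not describe the dataset or label names, the sketch below simply loads the checkpoint as a generic text-classification pipeline; the example sentence and the `LABEL_0`/`LABEL_1` output style are assumptions.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a generic text-classification pipeline.
classifier = pipeline("text-classification", model="lsb/test_trainer")

# The label names depend on how the classification head was configured during
# fine-tuning, so expect generic LABEL_0 / LABEL_1 style outputs unless id2label was set.
print(classifier("This is an example sentence to classify."))
```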
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "test_trainer", "results": []}]}
lsb/test_trainer
null
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-15T23:15:46+00:00
[]
[]
TAGS #transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
test\_trainer ============= This model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.0039 * Accuracy: 1.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
yxs33220/llama-2-7b-model-0415500
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T23:19:11+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_notata-seqsight_8192_512_17M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset. It achieves the following results on the evaluation set: - Loss: 0.3842 - F1 Score: 0.8340 - Accuracy: 0.8340 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.4686 | 9.52 | 200 | 0.3799 | 0.8351 | 0.8351 | | 0.3926 | 19.05 | 400 | 0.3681 | 0.8379 | 0.8379 | | 0.3801 | 28.57 | 600 | 0.3657 | 0.8387 | 0.8387 | | 0.3719 | 38.1 | 800 | 0.3629 | 0.8398 | 0.8398 | | 0.3651 | 47.62 | 1000 | 0.3631 | 0.8428 | 0.8428 | | 0.3601 | 57.14 | 1200 | 0.3627 | 0.8430 | 0.8430 | | 0.3552 | 66.67 | 1400 | 0.3663 | 0.8398 | 0.8398 | | 0.3508 | 76.19 | 1600 | 0.3631 | 0.8387 | 0.8387 | | 0.347 | 85.71 | 1800 | 0.3673 | 0.8398 | 0.8400 | | 0.3416 | 95.24 | 2000 | 0.3629 | 0.8399 | 0.8400 | | 0.3379 | 104.76 | 2200 | 0.3683 | 0.8392 | 0.8393 | | 0.334 | 114.29 | 2400 | 0.3696 | 0.8380 | 0.8379 | | 0.3296 | 123.81 | 2600 | 0.3699 | 0.8406 | 0.8406 | | 0.3272 | 133.33 | 2800 | 0.3694 | 0.8374 | 0.8374 | | 0.3217 | 142.86 | 3000 | 0.3776 | 0.8365 | 0.8366 | | 0.3178 | 152.38 | 3200 | 0.3771 | 0.8392 | 0.8393 | | 0.3148 | 161.9 | 3400 | 0.3828 | 0.8369 | 0.8370 | | 0.3105 | 171.43 | 3600 | 0.3850 | 0.8378 | 0.8379 | | 0.3069 | 180.95 | 3800 | 0.3884 | 0.8353 | 0.8355 | | 0.3035 | 190.48 | 4000 | 0.3925 | 0.8350 | 0.8351 | | 0.2997 | 200.0 | 4200 | 0.3969 | 0.8366 | 0.8366 | | 0.2957 | 209.52 | 4400 | 0.4023 | 0.8324 | 0.8327 | | 0.293 | 219.05 | 4600 | 0.4016 | 0.8349 | 0.8349 | | 0.29 | 228.57 | 4800 | 0.4094 | 0.8334 | 0.8334 | | 0.2861 | 238.1 | 5000 | 0.4047 | 0.8331 | 0.8332 | | 0.2843 | 247.62 | 5200 | 0.4087 | 0.8345 | 0.8346 | | 0.2798 | 257.14 | 5400 | 0.4142 | 0.8314 | 0.8315 | | 0.2769 | 266.67 | 5600 | 0.4196 | 0.8354 | 0.8355 | | 0.2744 | 276.19 | 5800 | 0.4278 | 0.8321 | 0.8323 | | 0.2718 | 285.71 | 6000 | 0.4249 | 0.8331 | 0.8332 | | 0.2693 | 295.24 | 6200 | 0.4289 | 0.8328 | 0.8329 | | 0.2676 | 304.76 | 6400 | 0.4327 | 0.8325 | 0.8327 | | 0.2649 | 314.29 | 6600 | 0.4316 | 0.8328 | 0.8329 | | 0.2623 | 323.81 | 6800 | 0.4330 | 0.8355 | 0.8355 | | 0.2605 | 333.33 | 7000 | 0.4382 | 0.8328 | 0.8329 | | 0.257 | 342.86 | 7200 | 0.4405 | 0.8328 | 0.8329 | | 0.2565 | 352.38 | 7400 | 0.4456 | 0.8313 | 0.8314 | | 0.2551 | 361.9 | 7600 | 0.4469 | 0.8305 | 0.8306 | | 0.253 | 371.43 | 7800 | 0.4481 | 0.8316 | 0.8317 | | 0.2526 | 380.95 | 8000 | 0.4500 | 0.8303 | 0.8304 | | 0.251 | 390.48 | 8200 | 0.4448 | 0.8306 | 0.8308 | | 0.2496 | 400.0 | 8400 | 0.4553 | 0.8297 | 0.8298 | | 0.2483 | 409.52 | 8600 | 0.4547 | 0.8303 | 
0.8304 | | 0.2474 | 419.05 | 8800 | 0.4551 | 0.8309 | 0.8310 | | 0.2472 | 428.57 | 9000 | 0.4570 | 0.8311 | 0.8312 | | 0.2461 | 438.1 | 9200 | 0.4591 | 0.8290 | 0.8291 | | 0.2463 | 447.62 | 9400 | 0.4599 | 0.8301 | 0.8302 | | 0.2452 | 457.14 | 9600 | 0.4590 | 0.8290 | 0.8291 | | 0.2455 | 466.67 | 9800 | 0.4586 | 0.8288 | 0.8289 | | 0.2442 | 476.19 | 10000 | 0.4595 | 0.8288 | 0.8289 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
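The card describes a PEFT fine-tune of the seqsight base model on the GUE core-promoter (non-TATA) task but does not show how to load the adapter. A hedged sketch follows; the two-label head, the illustrative DNA sequence, and the need for `trust_remote_code` are assumptions, since the base model's architecture and label mapping are not documented here.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_8192_512_17M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_8192_512_17M-L32_all"

# trust_remote_code and the two-label head are assumptions; check the base model card.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Score an illustrative DNA sequence for the core-promoter (non-TATA) task.
sequence = "ACGT" * 16
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```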
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_8192_512_17M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_8192_512_17M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_17M", "region:us" ]
null
2024-04-15T23:20:11+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
GUE\_prom\_prom\_core\_notata-seqsight\_8192\_512\_17M-L32\_all =============================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset. It achieves the following results on the evaluation set: * Loss: 0.3842 * F1 Score: 0.8340 * Accuracy: 0.8340 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
null
This is a llamafile for [WizardLM-2-7B](https://huggingface.co/microsoft/WizardLM-2-7B).

Converted and tested on 4/15/2024. Safetensors came from Microsoft's HF repo, quantized with llama.cpp, zipaligned with llamafile.

The q3-k-l sized quant is under 4 GB if you want something to share with your Windows-only users. Quality is higher than that of the average high school student.

Instructions to run q3-k-l on Windows: just download, add '.exe' to the filename, and open it. Bypass all friendly Microsoft warnings about using your own computer. It doesn't need network access; it's completely local. Put it on a keychain! Share with friends! Perfect gift for a significant other!

Other usage notes:

Anything larger than the q3-k-l is going to be over 4 GB and won't run as an .exe in Windows. You'll need to use WSL or another operating system.

WSL: If you get the error about APE, and the recommended command

sudo sh -c 'echo -1 > /proc/sys/fs/binfmt_misc/WSLInterop'

doesn't work, the WSLInterop file might be named something else. I had success with

sudo sh -c 'echo -1 > /proc/sys/fs/binfmt_misc/WSLInterop-late'

If that fails too, just navigate to /proc/sys/fs/binfmt_misc, see which files look like WSLInterop, and echo a -1 to whatever they're called by changing that part of the recommended command.

Size note: use q8_0, it's good.

-= Llamafile =-

A llamafile is a standalone executable that runs an LLM server locally on a variety of operating systems, including FreeBSD, Windows, Windows via WSL, Linux, and Mac. The same file works everywhere; I've tested several of these on FreeBSD, Windows, Windows via WSL, and Linux.

You just download the .llamafile (chmod +x or rename to .exe as needed), run it, open the chat interface in a browser, and interact. Options can be passed in to expose the API, etc. See their [docs](https://github.com/Mozilla-Ocho/llamafile) for details.

[Mozilla Blog Announcement for Llamafile](https://hacks.mozilla.org/2023/11/introducing-llamafile/)

- Windows: I tried the q3-k-l; it works.
- FreeBSD note: Yes, it actually works on a fresh install of FreeBSD.
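Beyond the browser chat UI, a running llamafile also embeds a llama.cpp server that can be scripted over an OpenAI-compatible HTTP API. A minimal sketch is below; the port (8080 is the llamafile default), the filename, and the exact endpoint behaviour are assumptions to verify against the llamafile docs.

```python
import requests

# Assumes the llamafile has already been started locally, e.g.:
#   ./WizardLM-2-7B.Q4_K_M.llamafile          (Linux / macOS / WSL)
#   WizardLM-2-7B.Q3_K_L.llamafile.exe        (Windows, renamed as described above)
# Port 8080 is the llamafile default; the OpenAI-compatible endpoint comes from the
# embedded llama.cpp server.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "local",
        "messages": [
            {"role": "user", "content": "Explain what a llamafile is in two sentences."}
        ],
        "max_tokens": 128,
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])
```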
{"license": "apache-2.0"}
gobean/WizardLM-2-7B.llamafile
null
[ "llamafile", "license:apache-2.0", "region:us" ]
null
2024-04-15T23:30:57+00:00
[]
[]
TAGS #llamafile #license-apache-2.0 #region-us
This is a llamafile for WizardLM-2-7B.

Converted and tested on 4/15/2024. Safetensors came from Microsoft's HF repo, quantized with URL, zipaligned with llamafile.

The q3-k-l sized quant is under 4 GB if you want something to share with your Windows-only users. Quality is higher than that of the average high school student.

Instructions to run q3-k-l on Windows: just download, add '.exe' to the filename, and open it. Bypass all friendly Microsoft warnings about using your own computer. It doesn't need network access; it's completely local. Put it on a keychain! Share with friends! Perfect gift for a significant other!

Other usage notes:

Anything larger than the q3-k-l is going to be over 4 GB and won't run as an .exe in Windows. You'll need to use WSL or another operating system.

WSL: If you get the error about APE, and the recommended command

sudo sh -c 'echo -1 > /proc/sys/fs/binfmt_misc/WSLInterop'

doesn't work, the WSLInterop file might be named something else. I had success with

sudo sh -c 'echo -1 > /proc/sys/fs/binfmt_misc/WSLInterop-late'

If that fails too, just navigate to /proc/sys/fs/binfmt_misc, see which files look like WSLInterop, and echo a -1 to whatever they're called by changing that part of the recommended command.

Size note: use q8_0, it's good.

-= Llamafile =-

A llamafile is a standalone executable that runs an LLM server locally on a variety of operating systems, including FreeBSD, Windows, Windows via WSL, Linux, and Mac. The same file works everywhere; I've tested several of these on FreeBSD, Windows, Windows via WSL, and Linux.

You just download the .llamafile (chmod +x or rename to .exe as needed), run it, open the chat interface in a browser, and interact. Options can be passed in to expose the API, etc. See their docs for details.

Mozilla Blog Announcement for Llamafile

- Windows: I tried the q3-k-l; it works.
- FreeBSD note: Yes, it actually works on a fresh install of FreeBSD.
[]
[ "TAGS\n#llamafile #license-apache-2.0 #region-us \n" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qa_kor_math_2 This model is a fine-tuned version of [hyunwoongko/kobart](https://huggingface.co/hyunwoongko/kobart) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1234 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.56 | 100 | 3.2887 | | No log | 1.13 | 200 | 0.8359 | | No log | 1.69 | 300 | 0.4944 | | No log | 2.26 | 400 | 0.3843 | | 2.4704 | 2.82 | 500 | 0.3349 | | 2.4704 | 3.39 | 600 | 0.3005 | | 2.4704 | 3.95 | 700 | 0.2768 | | 2.4704 | 4.52 | 800 | 0.2641 | | 2.4704 | 5.08 | 900 | 0.2479 | | 0.3213 | 5.65 | 1000 | 0.2335 | | 0.3213 | 6.21 | 1100 | 0.2208 | | 0.3213 | 6.78 | 1200 | 0.2117 | | 0.3213 | 7.34 | 1300 | 0.2041 | | 0.3213 | 7.91 | 1400 | 0.1964 | | 0.2503 | 8.47 | 1500 | 0.1876 | | 0.2503 | 9.04 | 1600 | 0.1790 | | 0.2503 | 9.6 | 1700 | 0.1745 | | 0.2503 | 10.17 | 1800 | 0.1673 | | 0.2503 | 10.73 | 1900 | 0.1623 | | 0.2141 | 11.3 | 2000 | 0.1579 | | 0.2141 | 11.86 | 2100 | 0.1527 | | 0.2141 | 12.43 | 2200 | 0.1494 | | 0.2141 | 12.99 | 2300 | 0.1438 | | 0.2141 | 13.56 | 2400 | 0.1427 | | 0.1873 | 14.12 | 2500 | 0.1386 | | 0.1873 | 14.69 | 2600 | 0.1347 | | 0.1873 | 15.25 | 2700 | 0.1334 | | 0.1873 | 15.82 | 2800 | 0.1321 | | 0.1873 | 16.38 | 2900 | 0.1295 | | 0.1718 | 16.95 | 3000 | 0.1276 | | 0.1718 | 17.51 | 3100 | 0.1263 | | 0.1718 | 18.08 | 3200 | 0.1255 | | 0.1718 | 18.64 | 3300 | 0.1244 | | 0.1718 | 19.21 | 3400 | 0.1240 | | 0.1628 | 19.77 | 3500 | 0.1234 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
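The card reports training losses but no usage example. A hedged sketch of querying the fine-tuned KoBART model as a seq2seq generator follows; the Korean word problem and the expected input format are illustrative assumptions, since the card does not document how questions were formatted during training.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "idah4/qa_kor_math_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative Korean word problem; the input format expected by the fine-tuned
# model is not documented in the card, so treat this as an assumption.
question = "사과가 3개 있고 2개를 더 사면 사과는 모두 몇 개인가요?"
inputs = tokenizer(question, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```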
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "hyunwoongko/kobart", "model-index": [{"name": "qa_kor_math_2", "results": []}]}
idah4/qa_kor_math_2
null
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:hyunwoongko/kobart", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T23:33:52+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-hyunwoongko/kobart #license-mit #autotrain_compatible #endpoints_compatible #region-us
qa\_kor\_math\_2 ================ This model is a fine-tuned version of hyunwoongko/kobart on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1234 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 400 * num\_epochs: 20 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-hyunwoongko/kobart #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
null
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with GGUF. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***What is the model format?*** We use GGUF format. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). # Downloading and running the models You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/): | Quant type | Description | |------------|--------------------------------------------------------------------------------------------| | Q5_K_M | High quality, recommended. | | Q5_K_S | High quality, recommended. | | Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. | | Q4_K_S | Slightly lower quality with more space savings, recommended. | | IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. | | IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. | | Q3_K_L | Lower quality but usable, good for low RAM availability. | | Q3_K_M | Even lower quality. | | IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | Q3_K_S | Low quality, not recommended. | | IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | Q2_K | Very low quality but surprisingly usable. | ## How to download GGUF files ? 
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/Poro-34B-GGUF-smashed and below it, a specific filename to download, such as: Poro-34B.IQ3_M.gguf.
- **Step 2**: Then click Download.

- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/Poro-34B-GGUF-smashed Poro-34B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

Alternatively, you can also download multiple files at once with a pattern:

```shell
huggingface-cli download PrunaAI/Poro-34B-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/Poro-34B-GGUF-smashed Poro-34B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->

## How to run model in GGUF format?

- **Option A** - Introductory example with `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m Poro-34B.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) - **Option B** - Running in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). - **Option C** - Running from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Poro-34B.IQ3_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<s>[INST] {prompt} [/INST]", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Poro-34B.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." 
} ] ) ``` - **Option D** - Running with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) ## Configurations The configuration info are in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/Poro-34B-GGUF-smashed
null
[ "gguf", "pruna-ai", "region:us" ]
null
2024-04-15T23:35:31+00:00
[]
[]
TAGS #gguf #pruna-ai #region-us
[![](https://i.URL alt=)](URL target=) ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL Simply make AI models cheaper, smaller, faster, and greener! ============================================================ * Give a thumbs up if you like this model! * Contact us and tell us which model to compress next here. * Request access to easily compress your *own* AI models here. * Read the documentations to know more here * Join Pruna AI community on Discord here to share feedback/suggestions or get help. Frequently Asked Questions * *How does the compression work?* The model is compressed with GGUF. * *How does the model quality change?* The quality of the model output might vary compared to the base model. * *What is the model format?* We use GGUF format. * *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. * *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. Downloading and running the models ================================== You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout this chart and this guide: How to download GGUF files ? ---------------------------- Note for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * URL * Option A - Downloading in 'text-generation-webui': * Step 1: Under Download Model, you can enter the model repo: PrunaAI/Poro-34B-GGUF-smashed-smashed and below it, a specific filename to download, such as: phi-2.IQ3\_M.gguf. * Step 2: Then click Download. * Option B - Downloading on the command line (including multiple files at once): * Step 1: We recommend using the 'huggingface-hub' Python library: * Step 2: Then you can download any individual model file to the current directory, at high speed, with a command like this: More advanced huggingface-cli download usage (click to read) Alternatively, you can also download multiple files at once with a pattern: For more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI. To accelerate downloads on fast connections (1Gbit/s or higher), install 'hf\_transfer': And set environment variable 'HF\_HUB\_ENABLE\_HF\_TRANSFER' to '1': Windows Command Line users: You can set the environment variable by running 'set HF\_HUB\_ENABLE\_HF\_TRANSFER=1' before the download command. How to run model in GGUF format? -------------------------------- * Option A - Introductory example with 'URL' command Make sure you are using 'URL' from commit d0cee0d or later. Change '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change '-c 32768' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. 
If you want to have a chat-style conversation, replace the '-p ' argument with '-i -ins' For other parameters and how to use them, please refer to the URL documentation * Option B - Running in 'text-generation-webui' Further instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL. * Option C - Running from Python code You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ``` ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: llama-cpp-python docs. #### First install the package Run one of the following commands, according to your system: #### Simple llama-cpp-python example code ``` * Option D - Running with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * LangChain + llama-cpp-python * LangChain + ctransformers Configurations -------------- The configuration info are in 'smash\_config.json'. Credits & License ----------------- The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi. Want to compress other models? ------------------------------ * Contact us and tell us which model to compress next here. * Request access to easily compress your own AI models here.
[ "### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.", "#### First install the package\n\nRun one of the following commands, according to your system:", "#### Simple llama-cpp-python example code\n\n```\n\n* Option D - Running with LangChain\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers\n\n\nConfigurations\n--------------\n\n\nThe configuration info are in 'smash\\_config.json'.\n\n\nCredits & License\n-----------------\n\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.\n\n\nWant to compress other models?\n------------------------------\n\n\n* Contact us and tell us which model to compress next here.\n* Request access to easily compress your own AI models here." ]
[ "TAGS\n#gguf #pruna-ai #region-us \n", "### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.", "#### First install the package\n\nRun one of the following commands, according to your system:", "#### Simple llama-cpp-python example code\n\n```\n\n* Option D - Running with LangChain\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers\n\n\nConfigurations\n--------------\n\n\nThe configuration info are in 'smash\\_config.json'.\n\n\nCredits & License\n-----------------\n\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.\n\n\nWant to compress other models?\n------------------------------\n\n\n* Contact us and tell us which model to compress next here.\n* Request access to easily compress your own AI models here." ]