| Field | Type | Range |
|:----------------|:----------------|:-----------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1–900k |
| metadata | stringlengths | 2–438k |
| id | stringlengths | 5–122 |
| last_modified | null | |
| tags | sequencelengths | 1–1.84k |
| sha | null | |
| created_at | stringlengths | 25–25 |
| arxiv | sequencelengths | 0–201 |
| languages | sequencelengths | 0–1.83k |
| tags_str | stringlengths | 17–9.34k |
| text_str | stringlengths | 0–389k |
| text_lists | sequencelengths | 0–722 |
| processed_texts | sequencelengths | 1–723 |

The rows below repeat these fields in this order, one field per line.
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K79me3-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4510 - F1 Score: 0.8058 - Accuracy: 0.8069 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5422 | 1.1 | 200 | 0.4758 | 0.7965 | 0.7965 | | 0.4821 | 2.21 | 400 | 0.4710 | 0.7835 | 0.7861 | | 0.4747 | 3.31 | 600 | 0.4565 | 0.7964 | 0.7979 | | 0.4647 | 4.42 | 800 | 0.4572 | 0.7934 | 0.7951 | | 0.4645 | 5.52 | 1000 | 0.4557 | 0.7929 | 0.7951 | | 0.4614 | 6.63 | 1200 | 0.4628 | 0.7889 | 0.7916 | | 0.4621 | 7.73 | 1400 | 0.4461 | 0.8014 | 0.8027 | | 0.4547 | 8.84 | 1600 | 0.4527 | 0.7977 | 0.7996 | | 0.4537 | 9.94 | 1800 | 0.4395 | 0.8081 | 0.8083 | | 0.4516 | 11.05 | 2000 | 0.4459 | 0.8015 | 0.8031 | | 0.4515 | 12.15 | 2200 | 0.4448 | 0.8050 | 0.8065 | | 0.4441 | 13.26 | 2400 | 0.4411 | 0.8043 | 0.8058 | | 0.4443 | 14.36 | 2600 | 0.4351 | 0.8100 | 0.8107 | | 0.446 | 15.47 | 2800 | 0.4485 | 0.8039 | 0.8062 | | 0.4377 | 16.57 | 3000 | 0.4372 | 0.8120 | 0.8128 | | 0.4418 | 17.68 | 3200 | 0.4359 | 0.8077 | 0.8089 | | 0.4363 | 18.78 | 3400 | 0.4334 | 0.8113 | 0.8121 | | 0.4382 | 19.89 | 3600 | 0.4475 | 0.8038 | 0.8062 | | 0.4384 | 20.99 | 3800 | 0.4373 | 0.8133 | 0.8145 | | 0.4371 | 22.1 | 4000 | 0.4314 | 0.8141 | 0.8152 | | 0.4329 | 23.2 | 4200 | 0.4370 | 0.8132 | 0.8145 | | 0.4339 | 24.31 | 4400 | 0.4396 | 0.8140 | 0.8152 | | 0.4243 | 25.41 | 4600 | 0.4385 | 0.8172 | 0.8183 | | 0.4343 | 26.52 | 4800 | 0.4316 | 0.8182 | 0.8190 | | 0.4348 | 27.62 | 5000 | 0.4331 | 0.8163 | 0.8173 | | 0.4299 | 28.73 | 5200 | 0.4311 | 0.8173 | 0.8183 | | 0.4267 | 29.83 | 5400 | 0.4314 | 0.8197 | 0.8204 | | 0.4292 | 30.94 | 5600 | 0.4313 | 0.8195 | 0.8200 | | 0.43 | 32.04 | 5800 | 0.4294 | 0.8199 | 0.8204 | | 0.4274 | 33.15 | 6000 | 0.4303 | 0.8185 | 0.8193 | | 0.4269 | 34.25 | 6200 | 0.4321 | 0.8203 | 0.8211 | | 0.4245 | 35.36 | 6400 | 0.4364 | 0.8174 | 0.8187 | | 0.4279 | 36.46 | 6600 | 0.4300 | 0.8190 | 0.8197 | | 0.4274 | 37.57 | 6800 | 0.4305 | 0.8197 | 0.8204 | | 0.4224 | 38.67 | 7000 | 0.4324 | 0.8200 | 0.8207 | | 0.4219 | 39.78 | 7200 | 0.4313 | 0.8195 | 0.8200 | | 0.4271 | 40.88 | 7400 | 0.4305 | 0.8190 | 0.8197 | | 0.4229 | 41.99 | 7600 | 0.4325 | 0.8185 | 0.8193 | | 0.4239 | 43.09 | 7800 | 0.4347 | 0.8163 | 0.8173 | | 0.4233 | 44.2 | 8000 | 0.4332 | 0.8173 | 0.8183 | | 0.4222 | 45.3 | 8200 | 0.4327 | 0.8176 | 0.8183 | | 0.4211 | 46.41 | 8400 | 0.4327 | 0.8189 | 0.8197 | | 0.4243 | 47.51 | 8600 | 0.4311 | 0.8193 | 0.8200 | | 0.4216 | 48.62 | 8800 | 0.4329 | 0.8183 | 0.8193 | | 
0.4194 | 49.72 | 9000 | 0.4343 | 0.8192 | 0.8200 | | 0.4243 | 50.83 | 9200 | 0.4337 | 0.8191 | 0.8200 | | 0.4224 | 51.93 | 9400 | 0.4321 | 0.8175 | 0.8183 | | 0.4203 | 53.04 | 9600 | 0.4321 | 0.8172 | 0.8180 | | 0.419 | 54.14 | 9800 | 0.4316 | 0.8173 | 0.8180 | | 0.4199 | 55.25 | 10000 | 0.4327 | 0.8175 | 0.8183 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
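The card above documents training only. As a hedged illustration of inference (the sequence-classification head, `num_labels=2`, and tokenizer behaviour are assumptions inferred from the reported F1/accuracy metrics, not facts stated in the card), a PEFT LoRA adapter like this one is typically attached to its base model with `PeftModel.from_pretrained`:

```python
# Hedged sketch, not the repo's documented usage; the base model may
# additionally require trust_remote_code=True if it uses a custom architecture.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_22M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_22M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

inputs = tokenizer("ACGTGGCTA", return_tensors="pt")  # an input DNA sequence
logits = model(**inputs).logits
```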
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:11:23+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_EMP\_H3K79me3-seqsight\_16384\_512\_22M-L1\_f ================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset. It achieves the following results on the evaluation set: * Loss: 0.4510 * F1 Score: 0.8058 * Accuracy: 0.8069 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - embracellm/sushi_LoRA_2 <Gallery /> ## Model description These are embracellm/sushi_LoRA_2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of sushi` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](embracellm/sushi_LoRA_2/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
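The "How to use" block in the card above is left as a TODO by the generator. A minimal sketch, assuming the standard diffusers LoRA-loading API, the fp16 VAE named in the card, and a CUDA device (prompt, steps, and device are illustrative, not card facts):

```python
# Hedged sketch for the card's TODO; generation settings are assumptions.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("embracellm/sushi_LoRA_2")  # the adapter from this repo

# The card's trigger phrase must appear in the prompt.
image = pipe("a photo of sushi on a wooden board", num_inference_steps=30).images[0]
image.save("sushi.png")
```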
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of sushi", "widget": []}
embracellm/sushi_LoRA_2
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-27T08:12:40+00:00
[]
[]
TAGS #diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# SDXL LoRA DreamBooth - embracellm/sushi_LoRA_2 <Gallery /> ## Model description These are embracellm/sushi_LoRA_2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sushi to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# SDXL LoRA DreamBooth - embracellm/sushi_LoRA_2\n\n<Gallery />", "## Model description\n\nThese are embracellm/sushi_LoRA_2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of sushi to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# SDXL LoRA DreamBooth - embracellm/sushi_LoRA_2\n\n<Gallery />", "## Model description\n\nThese are embracellm/sushi_LoRA_2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of sushi to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
text-generation
transformers
# stablelm-2-zephyr-1.6b-slerpx2 stablelm-2-zephyr-1.6b-slerpx2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [aipib/stablelm-2-1_6b_slerpmerge](https://huggingface.co/aipib/stablelm-2-1_6b_slerpmerge) * [stabilityai/stablelm-2-1_6b](https://huggingface.co/stabilityai/stablelm-2-1_6b) ## 🧩 Configuration ```yaml slices: - sources: - model: aipib/stablelm-2-1_6b_slerpmerge layer_range: [0, 24] - model: stabilityai/stablelm-2-1_6b layer_range: [0, 24] merge_method: slerp base_model: aipib/stablelm-2-1_6b_slerpmerge parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "aipib/stablelm-2-zephyr-1.6b-slerpx2" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
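For intuition about the config above: `merge_method: slerp` interpolates each pair of weight tensors along the arc between them (treating each tensor as a high-dimensional vector), with `t` controlling the blend and the per-filter `t` schedules varying it across layer depth, pulling self-attention and MLP weights in opposite directions. A minimal NumPy sketch of the interpolation itself, assuming flattened tensors (mergekit's real implementation adds layer filtering and dtype handling on top of this):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))  # angle between a and b
    if omega < eps:  # near-parallel tensors: fall back to plain lerp
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```

At `t=0` this returns one parent's tensor unchanged and at `t=1` the other's, which is what the `[0, 0.5, 0.3, 0.7, 1]`-style schedules sweep between.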
{"tags": ["merge", "mergekit", "lazymergekit", "aipib/stablelm-2-1_6b_slerpmerge", "stabilityai/stablelm-2-1_6b"], "base_model": ["aipib/stablelm-2-1_6b_slerpmerge", "stabilityai/stablelm-2-1_6b"]}
aipib/stablelm-2-zephyr-1.6b-slerpx2
null
[ "transformers", "safetensors", "stablelm", "text-generation", "merge", "mergekit", "lazymergekit", "aipib/stablelm-2-1_6b_slerpmerge", "stabilityai/stablelm-2-1_6b", "conversational", "base_model:aipib/stablelm-2-1_6b_slerpmerge", "base_model:stabilityai/stablelm-2-1_6b", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T08:13:01+00:00
[]
[]
TAGS #transformers #safetensors #stablelm #text-generation #merge #mergekit #lazymergekit #aipib/stablelm-2-1_6b_slerpmerge #stabilityai/stablelm-2-1_6b #conversational #base_model-aipib/stablelm-2-1_6b_slerpmerge #base_model-stabilityai/stablelm-2-1_6b #autotrain_compatible #endpoints_compatible #region-us
# stablelm-2-zephyr-1.6b-slerpx2 stablelm-2-zephyr-1.6b-slerpx2 is a merge of the following models using LazyMergekit: * aipib/stablelm-2-1_6b_slerpmerge * stabilityai/stablelm-2-1_6b ## Configuration ## Usage
[ "# stablelm-2-zephyr-1.6b-slerpx2\n\nstablelm-2-zephyr-1.6b-slerpx2 is a merge of the following models using LazyMergekit:\n* aipib/stablelm-2-1_6b_slerpmerge\n* stabilityai/stablelm-2-1_6b", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #merge #mergekit #lazymergekit #aipib/stablelm-2-1_6b_slerpmerge #stabilityai/stablelm-2-1_6b #conversational #base_model-aipib/stablelm-2-1_6b_slerpmerge #base_model-stabilityai/stablelm-2-1_6b #autotrain_compatible #endpoints_compatible #region-us \n", "# stablelm-2-zephyr-1.6b-slerpx2\n\nstablelm-2-zephyr-1.6b-slerpx2 is a merge of the following models using LazyMergekit:\n* aipib/stablelm-2-1_6b_slerpmerge\n* stabilityai/stablelm-2-1_6b", "## Configuration", "## Usage" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_5iters_bs256_nodpo_only4w_iter_1 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
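The card above documents training only. As a hedged inference sketch (chat-template handling, dtype, and generation settings are assumptions; the card does not specify usage):

```python
# Illustrative only; none of these settings come from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "In one sentence, what does DPO optimize?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```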
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.001_5iters_bs256_nodpo_only4w_iter_1", "results": []}]}
ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_1
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T08:13:33+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.001_5iters_bs256_nodpo_only4w_iter_1 This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
[ "# 0.001_5iters_bs256_nodpo_only4w_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.001_5iters_bs256_nodpo_only4w_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
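Every section in the card above is an unfilled placeholder; the repo tags are the only concrete signal (a BERT checkpoint registered under the text-generation pipeline). Purely as a hedged loading sketch, since nothing below is documented by the card:

```python
# Assumption-laden sketch: the card states no intended use, so this only
# loads the checkpoint and inspects its contextual embeddings.
from transformers import AutoModel, AutoTokenizer

repo = "ytcheng/bert-base-chinese"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer("你好,世界", return_tensors="pt")
hidden = model(**inputs).last_hidden_state  # (batch, tokens, hidden_size)
print(hidden.shape)
```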
{"library_name": "transformers", "tags": []}
ytcheng/bert-base-chinese
null
[ "transformers", "safetensors", "bert", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T08:13:37+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
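As with the previous row, the card is an empty template; the repo tags (#idefics, #4-bit) are the only usable information. A hedged sketch using transformers' IDEFICS classes (the prompt-list processor API here matches the transformers releases current at the card's date; a blank placeholder image keeps the snippet self-contained):

```python
# Hedged sketch based only on the repo tags; the card documents nothing.
import torch
from PIL import Image
from transformers import AutoProcessor, IdeficsForVisionText2Text

repo = "eduard76/idefics-9b-test"
processor = AutoProcessor.from_pretrained(repo)
model = IdeficsForVisionText2Text.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

img = Image.new("RGB", (224, 224), "white")  # placeholder instead of a real photo
prompts = [["User: What is in this image?", img, "<end_of_utterance>", "\nAssistant:"]]
inputs = processor(prompts, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```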
{"library_name": "transformers", "tags": []}
eduard76/idefics-9b-test
null
[ "transformers", "safetensors", "idefics", "pretraining", "arxiv:1910.09700", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T08:14:37+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #idefics #pretraining #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #idefics #pretraining #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# EnverLee/DAVinCI-42dot_LLM-PLM-1.3B-v1.2-Q4_0-GGUF This model was converted to GGUF format from [`jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v1.2`](https://huggingface.co/jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v1.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v1.2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo EnverLee/DAVinCI-42dot_LLM-PLM-1.3B-v1.2-Q4_0-GGUF --model davinci-42dot_llm-plm-1.3b-v1.2.Q4_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo EnverLee/DAVinCI-42dot_LLM-PLM-1.3B-v1.2-Q4_0-GGUF --model davinci-42dot_llm-plm-1.3b-v1.2.Q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m davinci-42dot_llm-plm-1.3b-v1.2.Q4_0.gguf -n 128 ```
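The card above covers the llama.cpp CLI and server. If Python bindings are preferred, a hedged sketch with llama-cpp-python (using it at all is an assumption; the card itself only documents the CLI path):

```python
# Assumes `pip install llama-cpp-python`; not part of the original card.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="EnverLee/DAVinCI-42dot_LLM-PLM-1.3B-v1.2-Q4_0-GGUF",
    filename="davinci-42dot_llm-plm-1.3b-v1.2.Q4_0.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```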
{"license": "cc-by-nc-4.0", "tags": ["llama-cpp", "gguf-my-repo"]}
EnverLee/DAVinCI-42dot_LLM-PLM-1.3B-v1.2-Q4_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-27T08:15:35+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #region-us
# EnverLee/DAVinCI-42dot_LLM-PLM-1.3B-v1.2-Q4_0-GGUF This model was converted to GGUF format from 'jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v1.2' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# EnverLee/DAVinCI-42dot_LLM-PLM-1.3B-v1.2-Q4_0-GGUF\nThis model was converted to GGUF format from 'jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v1.2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #region-us \n", "# EnverLee/DAVinCI-42dot_LLM-PLM-1.3B-v1.2-Q4_0-GGUF\nThis model was converted to GGUF format from 'jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v1.2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K79me3-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4452 - F1 Score: 0.8154 - Accuracy: 0.8155 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5215 | 1.1 | 200 | 0.4615 | 0.8030 | 0.8031 | | 0.4709 | 2.21 | 400 | 0.4541 | 0.7966 | 0.7982 | | 0.4622 | 3.31 | 600 | 0.4492 | 0.7985 | 0.8003 | | 0.4502 | 4.42 | 800 | 0.4476 | 0.7970 | 0.7989 | | 0.4472 | 5.52 | 1000 | 0.4517 | 0.7965 | 0.7992 | | 0.4422 | 6.63 | 1200 | 0.4515 | 0.8038 | 0.8062 | | 0.4408 | 7.73 | 1400 | 0.4379 | 0.8084 | 0.8100 | | 0.4331 | 8.84 | 1600 | 0.4504 | 0.8043 | 0.8062 | | 0.4304 | 9.94 | 1800 | 0.4252 | 0.8158 | 0.8159 | | 0.4271 | 11.05 | 2000 | 0.4387 | 0.8131 | 0.8141 | | 0.4259 | 12.15 | 2200 | 0.4249 | 0.8224 | 0.8225 | | 0.4208 | 13.26 | 2400 | 0.4313 | 0.8171 | 0.8180 | | 0.4204 | 14.36 | 2600 | 0.4253 | 0.8241 | 0.8245 | | 0.4191 | 15.47 | 2800 | 0.4306 | 0.8184 | 0.8193 | | 0.4117 | 16.57 | 3000 | 0.4295 | 0.8197 | 0.8197 | | 0.4177 | 17.68 | 3200 | 0.4229 | 0.8227 | 0.8232 | | 0.4109 | 18.78 | 3400 | 0.4307 | 0.8175 | 0.8183 | | 0.4111 | 19.89 | 3600 | 0.4274 | 0.8189 | 0.8197 | | 0.4112 | 20.99 | 3800 | 0.4294 | 0.8148 | 0.8159 | | 0.4094 | 22.1 | 4000 | 0.4229 | 0.8199 | 0.8204 | | 0.4058 | 23.2 | 4200 | 0.4264 | 0.8195 | 0.8200 | | 0.4058 | 24.31 | 4400 | 0.4353 | 0.8152 | 0.8162 | | 0.3953 | 25.41 | 4600 | 0.4305 | 0.8202 | 0.8204 | | 0.4061 | 26.52 | 4800 | 0.4268 | 0.8212 | 0.8214 | | 0.4051 | 27.62 | 5000 | 0.4254 | 0.8185 | 0.8190 | | 0.3985 | 28.73 | 5200 | 0.4258 | 0.8171 | 0.8176 | | 0.3966 | 29.83 | 5400 | 0.4238 | 0.8209 | 0.8211 | | 0.3972 | 30.94 | 5600 | 0.4277 | 0.8235 | 0.8239 | | 0.3984 | 32.04 | 5800 | 0.4262 | 0.8260 | 0.8259 | | 0.395 | 33.15 | 6000 | 0.4265 | 0.8187 | 0.8193 | | 0.3924 | 34.25 | 6200 | 0.4334 | 0.8185 | 0.8187 | | 0.3933 | 35.36 | 6400 | 0.4317 | 0.8188 | 0.8193 | | 0.3938 | 36.46 | 6600 | 0.4265 | 0.8207 | 0.8211 | | 0.3934 | 37.57 | 6800 | 0.4268 | 0.8181 | 0.8187 | | 0.3892 | 38.67 | 7000 | 0.4311 | 0.8143 | 0.8152 | | 0.3862 | 39.78 | 7200 | 0.4302 | 0.8203 | 0.8207 | | 0.3933 | 40.88 | 7400 | 0.4250 | 0.8233 | 0.8235 | | 0.3871 | 41.99 | 7600 | 0.4284 | 0.8200 | 0.8204 | | 0.3871 | 43.09 | 7800 | 0.4346 | 0.8174 | 0.8180 | | 0.3872 | 44.2 | 8000 | 0.4330 | 0.8145 | 0.8152 | | 0.3853 | 45.3 | 8200 | 0.4323 | 0.8163 | 0.8166 | | 0.3838 | 46.41 | 8400 | 0.4335 | 0.8180 | 0.8183 | | 0.387 | 47.51 | 8600 | 0.4323 | 0.8188 | 0.8193 | | 0.384 | 48.62 | 8800 | 0.4316 | 0.8152 | 0.8159 | | 
0.3823 | 49.72 | 9000 | 0.4336 | 0.8157 | 0.8162 | | 0.3865 | 50.83 | 9200 | 0.4336 | 0.8166 | 0.8173 | | 0.3844 | 51.93 | 9400 | 0.4317 | 0.8171 | 0.8176 | | 0.3816 | 53.04 | 9600 | 0.4324 | 0.8183 | 0.8187 | | 0.3823 | 54.14 | 9800 | 0.4319 | 0.8190 | 0.8193 | | 0.3797 | 55.25 | 10000 | 0.4332 | 0.8164 | 0.8169 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
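The training table above reports F1 alongside accuracy at every evaluation step. As a generic reminder of what those two numbers measure (this is illustrative sklearn code, not the repo's actual `compute_metrics`; whether the reported F1 is binary or macro-averaged is not stated in the card):

```python
# Generic illustration; not taken from this repository's evaluation code.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]
print("accuracy:", accuracy_score(y_true, y_pred))       # fraction correct
print("F1:", f1_score(y_true, y_pred, average="macro"))  # balances precision/recall
```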
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:16:16+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_EMP\_H3K79me3-seqsight\_16384\_512\_22M-L8\_f ================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset. It achieves the following results on the evaluation set: * Loss: 0.4452 * F1 Score: 0.8154 * Accuracy: 0.8155 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Manavshah/llama4-dolphin-8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama4-dolphin-8B-GGUF/resolve/main/llama4-dolphin-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/llama4-dolphin-8B-GGUF/resolve/main/llama4-dolphin-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/llama4-dolphin-8B-GGUF/resolve/main/llama4-dolphin-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/llama4-dolphin-8B-GGUF/resolve/main/llama4-dolphin-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/llama4-dolphin-8B-GGUF/resolve/main/llama4-dolphin-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/llama4-dolphin-8B-GGUF/resolve/main/llama4-dolphin-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/llama4-dolphin-8B-GGUF/resolve/main/llama4-dolphin-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/llama4-dolphin-8B-GGUF/resolve/main/llama4-dolphin-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/llama4-dolphin-8B-GGUF/resolve/main/llama4-dolphin-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama4-dolphin-8B-GGUF/resolve/main/llama4-dolphin-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama4-dolphin-8B-GGUF/resolve/main/llama4-dolphin-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/llama4-dolphin-8B-GGUF/resolve/main/llama4-dolphin-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/llama4-dolphin-8B-GGUF/resolve/main/llama4-dolphin-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/llama4-dolphin-8B-GGUF/resolve/main/llama4-dolphin-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/llama4-dolphin-8B-GGUF/resolve/main/llama4-dolphin-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
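To pull one of the quants in the table above programmatically rather than through the links, a minimal sketch with huggingface_hub (picking Q4_K_M simply follows the card's own "fast, recommended" note):

```python
# Hedged sketch; any GGUF runtime (llama.cpp and friends) can then load `path`.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/llama4-dolphin-8B-GGUF",
    filename="llama4-dolphin-8B.Q4_K_M.gguf",  # "fast, recommended" per the table
)
print(path)
```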
{"language": ["en"], "library_name": "transformers", "base_model": "Manavshah/llama4-dolphin-8B", "quantized_by": "mradermacher"}
mradermacher/llama4-dolphin-8B-GGUF
null
[ "transformers", "gguf", "en", "base_model:Manavshah/llama4-dolphin-8B", "endpoints_compatible", "region:us" ]
null
2024-04-27T08:17:50+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-Manavshah/llama4-dolphin-8B #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-Manavshah/llama4-dolphin-8B #endpoints_compatible #region-us \n" ]
text-generation
transformers
# stablelm-2-zephyr-1.6b-slerpx3 stablelm-2-zephyr-1.6b-slerpx3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [aipib/stablelm-2-zephyr-1.6b-slerpx2](https://huggingface.co/aipib/stablelm-2-zephyr-1.6b-slerpx2) * [aipib/stablelm-2-1_6b_slerpmerge](https://huggingface.co/aipib/stablelm-2-1_6b_slerpmerge) ## 🧩 Configuration ```yaml slices: - sources: - model: aipib/stablelm-2-zephyr-1.6b-slerpx2 layer_range: [0, 24] - model: aipib/stablelm-2-1_6b_slerpmerge layer_range: [0, 24] merge_method: slerp base_model: aipib/stablelm-2-zephyr-1.6b-slerpx2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "aipib/stablelm-2-zephyr-1.6b-slerpx3" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"tags": ["merge", "mergekit", "lazymergekit", "aipib/stablelm-2-zephyr-1.6b-slerpx2", "aipib/stablelm-2-1_6b_slerpmerge"], "base_model": ["aipib/stablelm-2-zephyr-1.6b-slerpx2", "aipib/stablelm-2-1_6b_slerpmerge"]}
aipib/stablelm-2-zephyr-1.6b-slerpx3
null
[ "transformers", "safetensors", "stablelm", "text-generation", "merge", "mergekit", "lazymergekit", "aipib/stablelm-2-zephyr-1.6b-slerpx2", "aipib/stablelm-2-1_6b_slerpmerge", "conversational", "base_model:aipib/stablelm-2-zephyr-1.6b-slerpx2", "base_model:aipib/stablelm-2-1_6b_slerpmerge", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T08:19:52+00:00
[]
[]
TAGS #transformers #safetensors #stablelm #text-generation #merge #mergekit #lazymergekit #aipib/stablelm-2-zephyr-1.6b-slerpx2 #aipib/stablelm-2-1_6b_slerpmerge #conversational #base_model-aipib/stablelm-2-zephyr-1.6b-slerpx2 #base_model-aipib/stablelm-2-1_6b_slerpmerge #autotrain_compatible #endpoints_compatible #region-us
# stablelm-2-zephyr-1.6b-slerpx3 stablelm-2-zephyr-1.6b-slerpx3 is a merge of the following models using LazyMergekit: * aipib/stablelm-2-zephyr-1.6b-slerpx2 * aipib/stablelm-2-1_6b_slerpmerge ## Configuration ## Usage
[ "# stablelm-2-zephyr-1.6b-slerpx3\n\nstablelm-2-zephyr-1.6b-slerpx3 is a merge of the following models using LazyMergekit:\n* aipib/stablelm-2-zephyr-1.6b-slerpx2\n* aipib/stablelm-2-1_6b_slerpmerge", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #merge #mergekit #lazymergekit #aipib/stablelm-2-zephyr-1.6b-slerpx2 #aipib/stablelm-2-1_6b_slerpmerge #conversational #base_model-aipib/stablelm-2-zephyr-1.6b-slerpx2 #base_model-aipib/stablelm-2-1_6b_slerpmerge #autotrain_compatible #endpoints_compatible #region-us \n", "# stablelm-2-zephyr-1.6b-slerpx3\n\nstablelm-2-zephyr-1.6b-slerpx3 is a merge of the following models using LazyMergekit:\n* aipib/stablelm-2-zephyr-1.6b-slerpx2\n* aipib/stablelm-2-1_6b_slerpmerge", "## Configuration", "## Usage" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cmpktheo/Droid
null
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T08:20:23+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# rakib72642/Face_Detection_Logic # HuggingFace: https://huggingface.co/rakib72642/Face_Detection_Logic # Setup Global API sudo apt install iproute2 -y && sudo apt install wget -y && sudo apt install unzip -y && sudo apt install nvtop -y && sudo apt-get install git-all -y && sudo apt-get install git-lfs -y && sudo apt-get update && sudo apt-get install libgl1 -y && sudo apt install curl -y && curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | sudo tee /etc/apt/sources.list.d/ngrok.list && sudo apt update && sudo apt install ngrok -y && sudo apt update && sudo apt upgrade -y && ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS && ngrok http --domain=he.ngrok.app 8060 # Setup Local API git clone https://huggingface.co/rakib72642/Face_Detection_Logic && cd Face_Detection_Logic && pip install -r requirements.txt && sudo apt update && sudo apt upgrade -y && python face_logic_api.py cd Face_Detection_Logic && python face_logic_api.py # hypercorn face_logic_api:app --bind 127.0.0.1:8060 --workers 4
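Once `face_logic_api.py` is running on port 8060, a client can call it over HTTP. The card does not document the API surface, so the endpoint path `/detect` and the multipart field name `file` below are purely hypothetical placeholders; check the repository's source for the real routes.

```python
# Hypothetical client for the API launched above. The card only tells us the
# service listens on port 8060 (or behind the ngrok domain); the "/detect"
# route and the "file" field name are assumptions for illustration only.
import requests

BASE_URL = "http://127.0.0.1:8060"  # or the ngrok URL configured above

with open("face.jpg", "rb") as f:
    resp = requests.post(f"{BASE_URL}/detect", files={"file": f})

resp.raise_for_status()
print(resp.json())
```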
{}
rakib72642/Face_Detection_Logic
null
[ "doi:10.57967/hf/2132", "region:us" ]
null
2024-04-27T08:21:11+00:00
[]
[]
TAGS #doi-10.57967/hf/2132 #region-us
# rakib72642/Face_Detection_Logic # HuggingFace: URL # Setup Global API sudo apt install iproute2 -y && sudo apt install wget -y && sudo apt install unzip -y && sudo apt install unzip -y && sudo apt install nvtop -y && sudo apt-get install git-all -y && sudo apt-get install git-lfs -y && sudo apt-get update && sudo apt-get install libgl1 -y && sudo apt install curl -y && curl -s URL | sudo tee /etc/apt/URL.d/URL >/dev/null && echo "deb URL buster main" | sudo tee /etc/apt/URL.d/URL && sudo apt update && sudo apt install ngrok -y && sudo apt update && sudo apt upgrade -y && ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS && ngrok http --domain=URL 8060 # Setup Local API git clone URL && cd Face_Detection_Logic && pip install -r URL && sudo apt update && sudo apt upgrade -y && python face_logic_api.py cd Face_Detection_Logic && python face_logic_api.py # hypercorn face_logic_api:app --bind 127.0.0.1:8060 --workers 4
[ "# rakib72642/Face_Detection_Logic", "# HuggingFace: URL", "# Setup Global API\n\nsudo apt install iproute2 -y && sudo apt install wget -y && sudo apt install unzip -y && sudo apt install unzip -y && sudo apt install nvtop -y && sudo apt-get install git-all -y && sudo apt-get install git-lfs -y && sudo apt-get update && sudo apt-get install libgl1 -y && sudo apt install curl -y && curl -s URL | sudo tee /etc/apt/URL.d/URL >/dev/null && echo \"deb URL buster main\" | sudo tee /etc/apt/URL.d/URL && sudo apt update && sudo apt install ngrok -y && sudo apt update && sudo apt upgrade -y && ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS && ngrok http --domain=URL 8060", "# Setup Local API\n\ngit clone URL && cd Face_Detection_Logic && pip install -r URL && sudo apt update && sudo apt upgrade -y && python face_logic_api.py\n\ncd Face_Detection_Logic && python face_logic_api.py", "# hypercorn face_logic_api:app --bind 127.0.0.1:8060 --workers 4" ]
[ "TAGS\n#doi-10.57967/hf/2132 #region-us \n", "# rakib72642/Face_Detection_Logic", "# HuggingFace: URL", "# Setup Global API\n\nsudo apt install iproute2 -y && sudo apt install wget -y && sudo apt install unzip -y && sudo apt install unzip -y && sudo apt install nvtop -y && sudo apt-get install git-all -y && sudo apt-get install git-lfs -y && sudo apt-get update && sudo apt-get install libgl1 -y && sudo apt install curl -y && curl -s URL | sudo tee /etc/apt/URL.d/URL >/dev/null && echo \"deb URL buster main\" | sudo tee /etc/apt/URL.d/URL && sudo apt update && sudo apt install ngrok -y && sudo apt update && sudo apt upgrade -y && ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS && ngrok http --domain=URL 8060", "# Setup Local API\n\ngit clone URL && cd Face_Detection_Logic && pip install -r URL && sudo apt update && sudo apt upgrade -y && python face_logic_api.py\n\ncd Face_Detection_Logic && python face_logic_api.py", "# hypercorn face_logic_api:app --bind 127.0.0.1:8060 --workers 4" ]
unconditional-image-generation
diffusers
# This model is an unconditional diffusion model for generating butterfly images, by shoe

```python
from diffusers import DDPMPipeline

# '{hub_model_id}' in the original card is an unrendered template placeholder;
# for this repository it resolves to 'shao918516/sd-class-butterflies-32'.
pipeline = DDPMPipeline.from_pretrained('shao918516/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
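As a follow-up to the snippet above, here is a hedged sketch that samples a small batch and writes the images to disk; the batch size and output filenames are arbitrary choices, and a GPU is optional.

```python
# Follow-up sketch: sample a small batch of butterflies and save them.
# Re-loads the pipeline so the snippet is self-contained; GPU is optional.
import torch
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("shao918516/sd-class-butterflies-32")
pipeline.to("cuda" if torch.cuda.is_available() else "cpu")

images = pipeline(batch_size=4).images  # a list of PIL images
for i, img in enumerate(images):
    img.save(f"butterfly_{i}.png")  # arbitrary output filenames
```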
{"language": ["en"], "license": "mit", "library_name": "diffusers", "tags": ["code"], "datasets": ["huggan/smithsonian_butterflies_subset"], "metrics": ["code_eval"], "targs": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"], "pipeline_tag": "unconditional-image-generation"}
shao918516/sd-class-butterflies-32
null
[ "diffusers", "safetensors", "code", "unconditional-image-generation", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
null
2024-04-27T08:23:30+00:00
[]
[ "en" ]
TAGS #diffusers #safetensors #code #unconditional-image-generation #en #dataset-huggan/smithsonian_butterflies_subset #license-mit #diffusers-DDPMPipeline #region-us
# This model is an unconditional diffusion model for generating butterfly images, by shoe '''python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('{hub_model_id}') image = pipeline().images[0] image
[ "# 此模型是用于生成蝴蝶图像的无条件生成扩散模型 by shoe\n\n'''python\nfrom diffusers import DDPMPipeline\npipeline = DDPMPipeline.from_pretrained('{hub_model_id}')\nimage = pipeline().images[0]\nimage" ]
[ "TAGS\n#diffusers #safetensors #code #unconditional-image-generation #en #dataset-huggan/smithsonian_butterflies_subset #license-mit #diffusers-DDPMPipeline #region-us \n", "# 此模型是用于生成蝴蝶图像的无条件生成扩散模型 by shoe\n\n'''python\nfrom diffusers import DDPMPipeline\npipeline = DDPMPipeline.from_pretrained('{hub_model_id}')\nimage = pipeline().images[0]\nimage" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
swj0419/hp_all_STEP0000010
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T08:24:59+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K79me3-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4634 - F1 Score: 0.8166 - Accuracy: 0.8169 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5099 | 1.1 | 200 | 0.4519 | 0.8033 | 0.8037 | | 0.4636 | 2.21 | 400 | 0.4405 | 0.8105 | 0.8110 | | 0.4518 | 3.31 | 600 | 0.4419 | 0.8049 | 0.8065 | | 0.4373 | 4.42 | 800 | 0.4418 | 0.8102 | 0.8117 | | 0.4337 | 5.52 | 1000 | 0.4376 | 0.8138 | 0.8152 | | 0.4279 | 6.63 | 1200 | 0.4592 | 0.8005 | 0.8031 | | 0.4252 | 7.73 | 1400 | 0.4413 | 0.8081 | 0.8100 | | 0.4167 | 8.84 | 1600 | 0.4679 | 0.7975 | 0.8006 | | 0.41 | 9.94 | 1800 | 0.4273 | 0.8192 | 0.8193 | | 0.4059 | 11.05 | 2000 | 0.4373 | 0.8149 | 0.8159 | | 0.4028 | 12.15 | 2200 | 0.4246 | 0.8204 | 0.8207 | | 0.3971 | 13.26 | 2400 | 0.4294 | 0.8210 | 0.8218 | | 0.391 | 14.36 | 2600 | 0.4299 | 0.8196 | 0.8200 | | 0.388 | 15.47 | 2800 | 0.4330 | 0.8146 | 0.8155 | | 0.3789 | 16.57 | 3000 | 0.4315 | 0.8227 | 0.8228 | | 0.3822 | 17.68 | 3200 | 0.4286 | 0.8219 | 0.8221 | | 0.3748 | 18.78 | 3400 | 0.4470 | 0.8101 | 0.8114 | | 0.3706 | 19.89 | 3600 | 0.4310 | 0.8177 | 0.8180 | | 0.368 | 20.99 | 3800 | 0.4439 | 0.8122 | 0.8131 | | 0.3637 | 22.1 | 4000 | 0.4379 | 0.8240 | 0.8242 | | 0.3591 | 23.2 | 4200 | 0.4493 | 0.8186 | 0.8187 | | 0.3566 | 24.31 | 4400 | 0.4525 | 0.8165 | 0.8173 | | 0.3433 | 25.41 | 4600 | 0.4524 | 0.8177 | 0.8176 | | 0.3503 | 26.52 | 4800 | 0.4487 | 0.8104 | 0.8107 | | 0.3478 | 27.62 | 5000 | 0.4511 | 0.8158 | 0.8166 | | 0.3382 | 28.73 | 5200 | 0.4522 | 0.8188 | 0.8193 | | 0.3346 | 29.83 | 5400 | 0.4523 | 0.8159 | 0.8159 | | 0.3348 | 30.94 | 5600 | 0.4558 | 0.8136 | 0.8138 | | 0.3331 | 32.04 | 5800 | 0.4547 | 0.8199 | 0.8197 | | 0.3271 | 33.15 | 6000 | 0.4593 | 0.8171 | 0.8176 | | 0.3243 | 34.25 | 6200 | 0.4761 | 0.8143 | 0.8141 | | 0.3249 | 35.36 | 6400 | 0.4719 | 0.8191 | 0.8193 | | 0.32 | 36.46 | 6600 | 0.4721 | 0.8120 | 0.8124 | | 0.3182 | 37.57 | 6800 | 0.4760 | 0.8133 | 0.8138 | | 0.3185 | 38.67 | 7000 | 0.4761 | 0.8130 | 0.8138 | | 0.3098 | 39.78 | 7200 | 0.4743 | 0.8165 | 0.8169 | | 0.3144 | 40.88 | 7400 | 0.4666 | 0.8208 | 0.8207 | | 0.3077 | 41.99 | 7600 | 0.4682 | 0.8180 | 0.8183 | | 0.3043 | 43.09 | 7800 | 0.4861 | 0.8186 | 0.8187 | | 0.3072 | 44.2 | 8000 | 0.4798 | 0.8131 | 0.8135 | | 0.3004 | 45.3 | 8200 | 0.4849 | 0.8206 | 0.8204 | | 0.298 | 46.41 | 8400 | 0.4854 | 0.8203 | 0.8204 | | 0.304 | 47.51 | 8600 | 0.4814 | 0.8145 | 0.8148 | | 0.2999 | 48.62 | 8800 | 0.4790 | 0.8192 | 0.8193 | | 0.2968 | 
49.72 | 9000 | 0.4830 | 0.8168 | 0.8169 | | 0.2992 | 50.83 | 9200 | 0.4836 | 0.8152 | 0.8155 | | 0.2939 | 51.93 | 9400 | 0.4873 | 0.8178 | 0.8180 | | 0.2944 | 53.04 | 9600 | 0.4890 | 0.8182 | 0.8183 | | 0.2921 | 54.14 | 9800 | 0.4893 | 0.8179 | 0.8180 | | 0.2906 | 55.25 | 10000 | 0.4908 | 0.8168 | 0.8169 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
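For readers wanting to try the adapter, here is a minimal, hedged sketch of attaching this PEFT checkpoint to its base model. The `AutoModelForSequenceClassification` class and `num_labels=2` are assumptions inferred from the reported F1/accuracy metrics (the GUE tasks are binary classification); depending on the seqsight backbone you may also need `trust_remote_code=True`.

```python
# Sketch: attach this PEFT adapter to its base model. The Auto* class and
# num_labels=2 are assumptions (binary classification inferred from the card);
# add trust_remote_code=True if the backbone is a custom architecture.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_22M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_22M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # loads the adapter weights
model.eval()
```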
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:25:49+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_EMP\_H3K79me3-seqsight\_16384\_512\_22M-L32\_f =================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset. It achieves the following results on the evaluation set: * Loss: 0.4634 * F1 Score: 0.8166 * Accuracy: 0.8169 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/CalderaAI/Hexoteric-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hexoteric-7B-GGUF/resolve/main/Hexoteric-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Hexoteric-7B-GGUF/resolve/main/Hexoteric-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Hexoteric-7B-GGUF/resolve/main/Hexoteric-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Hexoteric-7B-GGUF/resolve/main/Hexoteric-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hexoteric-7B-GGUF/resolve/main/Hexoteric-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Hexoteric-7B-GGUF/resolve/main/Hexoteric-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hexoteric-7B-GGUF/resolve/main/Hexoteric-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Hexoteric-7B-GGUF/resolve/main/Hexoteric-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Hexoteric-7B-GGUF/resolve/main/Hexoteric-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hexoteric-7B-GGUF/resolve/main/Hexoteric-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hexoteric-7B-GGUF/resolve/main/Hexoteric-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Hexoteric-7B-GGUF/resolve/main/Hexoteric-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Hexoteric-7B-GGUF/resolve/main/Hexoteric-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hexoteric-7B-GGUF/resolve/main/Hexoteric-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Hexoteric-7B-GGUF/resolve/main/Hexoteric-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
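The Usage section above points to external READMEs for concatenating multi-part files. All quants in this table are single files, but for repositories where a quant is split into parts, reassembly is plain binary concatenation; a minimal sketch, assuming a `.part*` naming scheme (match it to the actual filenames you downloaded):

```python
# Sketch: reassemble a split GGUF by binary concatenation of its parts.
# The ".part*" pattern is an assumption -- match your actual downloaded names.
import glob
import shutil

parts = sorted(glob.glob("Hexoteric-7B.Q8_0.gguf.part*"))
with open("Hexoteric-7B.Q8_0.gguf", "wb") as out:
    for p in parts:
        with open(p, "rb") as f:
            shutil.copyfileobj(f, out)
```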
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mistral", "mix"], "base_model": "CalderaAI/Hexoteric-7B", "quantized_by": "mradermacher"}
mradermacher/Hexoteric-7B-GGUF
null
[ "transformers", "gguf", "mistral", "mix", "en", "base_model:CalderaAI/Hexoteric-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-27T08:26:05+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mistral #mix #en #base_model-CalderaAI/Hexoteric-7B #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #mistral #mix #en #base_model-CalderaAI/Hexoteric-7B #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me1-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset. It achieves the following results on the evaluation set: - Loss: 0.5405 - F1 Score: 0.7399 - Accuracy: 0.7437 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6381 | 1.01 | 200 | 0.6189 | 0.6661 | 0.6752 | | 0.6106 | 2.02 | 400 | 0.6046 | 0.6878 | 0.6897 | | 0.6012 | 3.03 | 600 | 0.5978 | 0.6996 | 0.7027 | | 0.5933 | 4.04 | 800 | 0.5925 | 0.7045 | 0.7080 | | 0.5842 | 5.05 | 1000 | 0.5821 | 0.7146 | 0.7178 | | 0.5794 | 6.06 | 1200 | 0.5894 | 0.7111 | 0.7181 | | 0.5719 | 7.07 | 1400 | 0.5781 | 0.7237 | 0.7285 | | 0.5715 | 8.08 | 1600 | 0.5681 | 0.7269 | 0.7314 | | 0.5667 | 9.09 | 1800 | 0.5646 | 0.7337 | 0.7377 | | 0.5624 | 10.1 | 2000 | 0.5623 | 0.7343 | 0.7377 | | 0.5616 | 11.11 | 2200 | 0.5656 | 0.7302 | 0.7342 | | 0.5593 | 12.12 | 2400 | 0.5610 | 0.7332 | 0.7364 | | 0.5563 | 13.13 | 2600 | 0.5629 | 0.7275 | 0.7320 | | 0.5569 | 14.14 | 2800 | 0.5600 | 0.7324 | 0.7364 | | 0.555 | 15.15 | 3000 | 0.5620 | 0.7265 | 0.7311 | | 0.5531 | 16.16 | 3200 | 0.5646 | 0.7198 | 0.7260 | | 0.5511 | 17.17 | 3400 | 0.5650 | 0.7256 | 0.7307 | | 0.5528 | 18.18 | 3600 | 0.5525 | 0.7343 | 0.7374 | | 0.5484 | 19.19 | 3800 | 0.5605 | 0.7282 | 0.7330 | | 0.5496 | 20.2 | 4000 | 0.5558 | 0.7309 | 0.7352 | | 0.5478 | 21.21 | 4200 | 0.5668 | 0.7238 | 0.7301 | | 0.5465 | 22.22 | 4400 | 0.5567 | 0.7318 | 0.7361 | | 0.5431 | 23.23 | 4600 | 0.5651 | 0.7238 | 0.7298 | | 0.5505 | 24.24 | 4800 | 0.5573 | 0.7248 | 0.7304 | | 0.5434 | 25.25 | 5000 | 0.5510 | 0.7359 | 0.7393 | | 0.5476 | 26.26 | 5200 | 0.5502 | 0.7373 | 0.7405 | | 0.5429 | 27.27 | 5400 | 0.5528 | 0.7314 | 0.7361 | | 0.543 | 28.28 | 5600 | 0.5496 | 0.7358 | 0.7386 | | 0.547 | 29.29 | 5800 | 0.5535 | 0.7291 | 0.7342 | | 0.5402 | 30.3 | 6000 | 0.5504 | 0.7361 | 0.7399 | | 0.542 | 31.31 | 6200 | 0.5515 | 0.7322 | 0.7364 | | 0.5414 | 32.32 | 6400 | 0.5574 | 0.7281 | 0.7339 | | 0.5399 | 33.33 | 6600 | 0.5535 | 0.7290 | 0.7342 | | 0.5413 | 34.34 | 6800 | 0.5585 | 0.7220 | 0.7289 | | 0.5378 | 35.35 | 7000 | 0.5489 | 0.7379 | 0.7415 | | 0.5416 | 36.36 | 7200 | 0.5505 | 0.7319 | 0.7364 | | 0.5373 | 37.37 | 7400 | 0.5480 | 0.7345 | 0.7383 | | 0.5392 | 38.38 | 7600 | 0.5475 | 0.7374 | 0.7405 | | 0.5359 | 39.39 | 7800 | 0.5491 | 0.7319 | 0.7358 | | 0.5411 | 40.4 | 8000 | 0.5504 | 0.7343 | 0.7383 | | 0.5372 | 41.41 | 8200 | 0.5522 | 0.7292 | 0.7342 | | 0.5424 | 42.42 | 8400 | 0.5519 | 0.7291 | 0.7342 | | 0.5362 | 43.43 | 8600 | 0.5503 | 0.7333 | 0.7377 | | 0.5357 | 44.44 | 8800 | 0.5500 | 0.7328 | 0.7371 | | 0.5369 | 
45.45 | 9000 | 0.5494 | 0.7335 | 0.7377 | | 0.539 | 46.46 | 9200 | 0.5514 | 0.7302 | 0.7352 | | 0.5399 | 47.47 | 9400 | 0.5502 | 0.7308 | 0.7355 | | 0.5316 | 48.48 | 9600 | 0.5508 | 0.7322 | 0.7367 | | 0.5426 | 49.49 | 9800 | 0.5503 | 0.7318 | 0.7364 | | 0.537 | 50.51 | 10000 | 0.5497 | 0.7323 | 0.7367 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:26:26+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_EMP\_H3K4me1-seqsight\_16384\_512\_22M-L1\_f ================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset. It achieves the following results on the evaluation set: * Loss: 0.5405 * F1 Score: 0.7399 * Accuracy: 0.7437 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me1-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset. It achieves the following results on the evaluation set: - Loss: 0.5333 - F1 Score: 0.7476 - Accuracy: 0.7509 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6278 | 1.01 | 200 | 0.6081 | 0.6794 | 0.6866 | | 0.5931 | 2.02 | 400 | 0.5839 | 0.7145 | 0.7181 | | 0.5761 | 3.03 | 600 | 0.5719 | 0.7221 | 0.7270 | | 0.5664 | 4.04 | 800 | 0.5667 | 0.7235 | 0.7276 | | 0.5586 | 5.05 | 1000 | 0.5551 | 0.7350 | 0.7371 | | 0.5553 | 6.06 | 1200 | 0.5678 | 0.7166 | 0.7225 | | 0.5494 | 7.07 | 1400 | 0.5578 | 0.7283 | 0.7317 | | 0.548 | 8.08 | 1600 | 0.5539 | 0.7305 | 0.7342 | | 0.5446 | 9.09 | 1800 | 0.5495 | 0.7315 | 0.7352 | | 0.5408 | 10.1 | 2000 | 0.5541 | 0.7314 | 0.7358 | | 0.5394 | 11.11 | 2200 | 0.5517 | 0.7322 | 0.7364 | | 0.5365 | 12.12 | 2400 | 0.5466 | 0.7371 | 0.7399 | | 0.5315 | 13.13 | 2600 | 0.5561 | 0.7302 | 0.7355 | | 0.533 | 14.14 | 2800 | 0.5492 | 0.7351 | 0.7383 | | 0.5291 | 15.15 | 3000 | 0.5508 | 0.7356 | 0.7396 | | 0.5282 | 16.16 | 3200 | 0.5492 | 0.7343 | 0.7386 | | 0.5249 | 17.17 | 3400 | 0.5514 | 0.7374 | 0.7415 | | 0.5265 | 18.18 | 3600 | 0.5415 | 0.7389 | 0.7415 | | 0.521 | 19.19 | 3800 | 0.5439 | 0.7358 | 0.7396 | | 0.5218 | 20.2 | 4000 | 0.5461 | 0.7395 | 0.7424 | | 0.5173 | 21.21 | 4200 | 0.5543 | 0.7290 | 0.7348 | | 0.5172 | 22.22 | 4400 | 0.5488 | 0.7361 | 0.7396 | | 0.5143 | 23.23 | 4600 | 0.5537 | 0.7303 | 0.7355 | | 0.5198 | 24.24 | 4800 | 0.5483 | 0.7324 | 0.7380 | | 0.512 | 25.25 | 5000 | 0.5409 | 0.7368 | 0.7390 | | 0.5148 | 26.26 | 5200 | 0.5405 | 0.7380 | 0.7396 | | 0.5129 | 27.27 | 5400 | 0.5465 | 0.7285 | 0.7330 | | 0.5096 | 28.28 | 5600 | 0.5412 | 0.7352 | 0.7377 | | 0.5147 | 29.29 | 5800 | 0.5524 | 0.7287 | 0.7345 | | 0.5071 | 30.3 | 6000 | 0.5454 | 0.7356 | 0.7393 | | 0.5087 | 31.31 | 6200 | 0.5440 | 0.7339 | 0.7367 | | 0.5068 | 32.32 | 6400 | 0.5486 | 0.7282 | 0.7333 | | 0.5051 | 33.33 | 6600 | 0.5526 | 0.7271 | 0.7326 | | 0.5061 | 34.34 | 6800 | 0.5490 | 0.7262 | 0.7314 | | 0.4993 | 35.35 | 7000 | 0.5453 | 0.7381 | 0.7408 | | 0.5076 | 36.36 | 7200 | 0.5453 | 0.7292 | 0.7336 | | 0.5001 | 37.37 | 7400 | 0.5444 | 0.7315 | 0.7348 | | 0.5024 | 38.38 | 7600 | 0.5420 | 0.7365 | 0.7390 | | 0.4987 | 39.39 | 7800 | 0.5439 | 0.7391 | 0.7418 | | 0.502 | 40.4 | 8000 | 0.5472 | 0.7380 | 0.7408 | | 0.498 | 41.41 | 8200 | 0.5513 | 0.7236 | 0.7289 | | 0.5036 | 42.42 | 8400 | 0.5491 | 0.7305 | 0.7352 | | 0.4979 | 43.43 | 8600 | 0.5467 | 0.7357 | 0.7390 | | 0.4963 | 44.44 | 8800 | 0.5480 | 0.7315 | 0.7355 | | 0.4974 | 
45.45 | 9000 | 0.5465 | 0.7334 | 0.7367 | | 0.4992 | 46.46 | 9200 | 0.5498 | 0.7311 | 0.7355 | | 0.4994 | 47.47 | 9400 | 0.5490 | 0.7310 | 0.7355 | | 0.4892 | 48.48 | 9600 | 0.5508 | 0.7316 | 0.7358 | | 0.5014 | 49.49 | 9800 | 0.5486 | 0.7318 | 0.7361 | | 0.4966 | 50.51 | 10000 | 0.5479 | 0.7315 | 0.7355 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:27:15+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_EMP\_H3K4me1-seqsight\_16384\_512\_22M-L8\_f ================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset. It achieves the following results on the evaluation set: * Loss: 0.5333 * F1 Score: 0.7476 * Accuracy: 0.7509 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_EMP_H3K4me1-seqsight_16384_512_22M-L32_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5413
- F1 Score: 0.7419
- Accuracy: 0.7437

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6211 | 1.01 | 200 | 0.6003 | 0.6909 | 0.7001 |
| 0.5777 | 2.02 | 400 | 0.5707 | 0.7214 | 0.7241 |
| 0.5616 | 3.03 | 600 | 0.5586 | 0.7297 | 0.7336 |
| 0.5518 | 4.04 | 800 | 0.5591 | 0.7215 | 0.7263 |
| 0.5453 | 5.05 | 1000 | 0.5440 | 0.7442 | 0.7456 |
| 0.5395 | 6.06 | 1200 | 0.5573 | 0.7265 | 0.7326 |
| 0.5326 | 7.07 | 1400 | 0.5492 | 0.7390 | 0.7418 |
| 0.5302 | 8.08 | 1600 | 0.5441 | 0.7378 | 0.7412 |
| 0.5253 | 9.09 | 1800 | 0.5444 | 0.7316 | 0.7355 |
| 0.5189 | 10.1 | 2000 | 0.5509 | 0.7258 | 0.7317 |
| 0.5152 | 11.11 | 2200 | 0.5510 | 0.7322 | 0.7380 |
| 0.512 | 12.12 | 2400 | 0.5430 | 0.7357 | 0.7390 |
| 0.5049 | 13.13 | 2600 | 0.5595 | 0.7161 | 0.7247 |
| 0.5027 | 14.14 | 2800 | 0.5460 | 0.7325 | 0.7358 |
| 0.4972 | 15.15 | 3000 | 0.5504 | 0.7338 | 0.7380 |
| 0.4937 | 16.16 | 3200 | 0.5495 | 0.7337 | 0.7374 |
| 0.4902 | 17.17 | 3400 | 0.5576 | 0.7385 | 0.7408 |
| 0.4887 | 18.18 | 3600 | 0.5471 | 0.7336 | 0.7361 |
| 0.4833 | 19.19 | 3800 | 0.5491 | 0.7374 | 0.7402 |
| 0.4801 | 20.2 | 4000 | 0.5542 | 0.7379 | 0.7402 |
| 0.4742 | 21.21 | 4200 | 0.5480 | 0.7388 | 0.7418 |
| 0.4734 | 22.22 | 4400 | 0.5575 | 0.7331 | 0.7358 |
| 0.4652 | 23.23 | 4600 | 0.5643 | 0.7342 | 0.7364 |
| 0.4675 | 24.24 | 4800 | 0.5670 | 0.7277 | 0.7330 |
| 0.4604 | 25.25 | 5000 | 0.5572 | 0.7381 | 0.7399 |
| 0.4582 | 26.26 | 5200 | 0.5655 | 0.7327 | 0.7348 |
| 0.456 | 27.27 | 5400 | 0.5675 | 0.7279 | 0.7326 |
| 0.4525 | 28.28 | 5600 | 0.5664 | 0.7269 | 0.7314 |
| 0.4556 | 29.29 | 5800 | 0.5766 | 0.7308 | 0.7367 |
| 0.4418 | 30.3 | 6000 | 0.5733 | 0.7335 | 0.7355 |
| 0.4454 | 31.31 | 6200 | 0.5707 | 0.7294 | 0.7314 |
| 0.4414 | 32.32 | 6400 | 0.5699 | 0.7321 | 0.7355 |
| 0.4412 | 33.33 | 6600 | 0.5780 | 0.7311 | 0.7345 |
| 0.4375 | 34.34 | 6800 | 0.5752 | 0.7304 | 0.7326 |
| 0.4284 | 35.35 | 7000 | 0.5858 | 0.7333 | 0.7355 |
| 0.4378 | 36.36 | 7200 | 0.5758 | 0.7282 | 0.7317 |
| 0.4302 | 37.37 | 7400 | 0.5790 | 0.7294 | 0.7323 |
| 0.4303 | 38.38 | 7600 | 0.5758 | 0.7302 | 0.7320 |
| 0.4258 | 39.39 | 7800 | 0.5825 | 0.7277 | 0.7301 |
| 0.4275 | 40.4 | 8000 | 0.5895 | 0.7306 | 0.7333 |
| 0.421 | 41.41 | 8200 | 0.5953 | 0.7309 | 0.7348 |
| 0.4243 | 42.42 | 8400 | 0.5904 | 0.7298 | 0.7323 |
| 0.4218 | 43.43 | 8600 | 0.5867 | 0.7304 | 0.7326 |
| 0.4195 | 44.44 | 8800 | 0.5907 | 0.7304 | 0.7333 |
| 0.4175 | 45.45 | 9000 | 0.5904 | 0.7318 | 0.7339 |
| 0.4217 | 46.46 | 9200 | 0.5951 | 0.7322 | 0.7352 |
| 0.4219 | 47.47 | 9400 | 0.5904 | 0.7296 | 0.7330 |
| 0.4084 | 48.48 | 9600 | 0.5943 | 0.7308 | 0.7339 |
| 0.4195 | 49.49 | 9800 | 0.5912 | 0.7298 | 0.7330 |
| 0.4142 | 50.51 | 10000 | 0.5917 | 0.7299 | 0.7326 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
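A minimal inference sketch for loading this adapter (not part of the original card): it assumes the seqsight base model loads through `AutoModelForSequenceClassification` with `trust_remote_code=True` and that the task is binary classification (`num_labels=2`); both are assumptions, not facts documented above.

```python
# Hypothetical inference sketch for the adapter above; the repo ids are real,
# but num_labels=2 and trust_remote_code=True are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_22M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_16384_512_22M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights
model.eval()

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))
```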
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:27:30+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_EMP\_H3K4me1-seqsight\_16384\_512\_22M-L32\_f ================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset. It achieves the following results on the evaluation set: * Loss: 0.5413 * F1 Score: 0.7419 * Accuracy: 0.7437 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_EMP_H3K36me3-seqsight_16384_512_22M-L1_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5035
- F1 Score: 0.7724
- Accuracy: 0.7761

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5905 | 0.92 | 200 | 0.5589 | 0.7164 | 0.7208 |
| 0.5431 | 1.83 | 400 | 0.5466 | 0.7334 | 0.7362 |
| 0.5331 | 2.75 | 600 | 0.5371 | 0.7384 | 0.7408 |
| 0.5302 | 3.67 | 800 | 0.5267 | 0.7491 | 0.7506 |
| 0.5133 | 4.59 | 1000 | 0.5352 | 0.7428 | 0.7468 |
| 0.5112 | 5.5 | 1200 | 0.5236 | 0.7461 | 0.7494 |
| 0.5097 | 6.42 | 1400 | 0.5219 | 0.7503 | 0.7526 |
| 0.5056 | 7.34 | 1600 | 0.5315 | 0.7454 | 0.75 |
| 0.4991 | 8.26 | 1800 | 0.5299 | 0.7459 | 0.7506 |
| 0.5024 | 9.17 | 2000 | 0.5331 | 0.7417 | 0.7471 |
| 0.4981 | 10.09 | 2200 | 0.5397 | 0.7394 | 0.7466 |
| 0.4962 | 11.01 | 2400 | 0.5303 | 0.7482 | 0.7540 |
| 0.4941 | 11.93 | 2600 | 0.5250 | 0.7491 | 0.7543 |
| 0.4946 | 12.84 | 2800 | 0.5226 | 0.7485 | 0.7537 |
| 0.4914 | 13.76 | 3000 | 0.5363 | 0.7432 | 0.75 |
| 0.4878 | 14.68 | 3200 | 0.5182 | 0.7573 | 0.7606 |
| 0.4934 | 15.6 | 3400 | 0.5317 | 0.7414 | 0.7483 |
| 0.4878 | 16.51 | 3600 | 0.5212 | 0.7529 | 0.7575 |
| 0.489 | 17.43 | 3800 | 0.5072 | 0.7597 | 0.7626 |
| 0.4866 | 18.35 | 4000 | 0.5174 | 0.7550 | 0.7597 |
| 0.4862 | 19.27 | 4200 | 0.5283 | 0.7516 | 0.7572 |
| 0.486 | 20.18 | 4400 | 0.5300 | 0.7429 | 0.7497 |
| 0.4819 | 21.1 | 4600 | 0.5271 | 0.7486 | 0.7543 |
| 0.4842 | 22.02 | 4800 | 0.5275 | 0.7530 | 0.7583 |
| 0.4849 | 22.94 | 5000 | 0.5218 | 0.7502 | 0.7557 |
| 0.4824 | 23.85 | 5200 | 0.5267 | 0.7445 | 0.7509 |
| 0.4804 | 24.77 | 5400 | 0.5341 | 0.7400 | 0.7474 |
| 0.4804 | 25.69 | 5600 | 0.5254 | 0.7503 | 0.7560 |
| 0.4806 | 26.61 | 5800 | 0.5150 | 0.7589 | 0.7635 |
| 0.4777 | 27.52 | 6000 | 0.5425 | 0.7392 | 0.7468 |
| 0.4824 | 28.44 | 6200 | 0.5234 | 0.7466 | 0.7526 |
| 0.4784 | 29.36 | 6400 | 0.5303 | 0.7438 | 0.7509 |
| 0.4791 | 30.28 | 6600 | 0.5282 | 0.7454 | 0.7520 |
| 0.4787 | 31.19 | 6800 | 0.5133 | 0.7611 | 0.7655 |
| 0.4795 | 32.11 | 7000 | 0.5269 | 0.7451 | 0.7517 |
| 0.4783 | 33.03 | 7200 | 0.5215 | 0.7538 | 0.7592 |
| 0.4764 | 33.94 | 7400 | 0.5271 | 0.7502 | 0.7557 |
| 0.4759 | 34.86 | 7600 | 0.5259 | 0.7484 | 0.7546 |
| 0.4777 | 35.78 | 7800 | 0.5215 | 0.7497 | 0.7554 |
| 0.4782 | 36.7 | 8000 | 0.5218 | 0.7526 | 0.7583 |
| 0.4769 | 37.61 | 8200 | 0.5302 | 0.7408 | 0.7483 |
| 0.4781 | 38.53 | 8400 | 0.5170 | 0.7548 | 0.7600 |
| 0.4748 | 39.45 | 8600 | 0.5247 | 0.7489 | 0.7549 |
| 0.4702 | 40.37 | 8800 | 0.5320 | 0.7461 | 0.7526 |
| 0.4781 | 41.28 | 9000 | 0.5234 | 0.7459 | 0.7523 |
| 0.473 | 42.2 | 9200 | 0.5181 | 0.7566 | 0.7618 |
| 0.4723 | 43.12 | 9400 | 0.5212 | 0.7522 | 0.7577 |
| 0.4761 | 44.04 | 9600 | 0.5247 | 0.7467 | 0.7532 |
| 0.4743 | 44.95 | 9800 | 0.5252 | 0.7462 | 0.7526 |
| 0.4746 | 45.87 | 10000 | 0.5232 | 0.7491 | 0.7552 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
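For reference, the hyperparameter list above maps onto 🤗 `TrainingArguments` roughly as follows. This is a sketch, not the authors' script: treating `train_batch_size` as per-device and deriving the 200-step eval cadence from the results table are assumptions.

```python
# Sketch of how the listed hyperparameters map onto TrainingArguments.
# The Adam betas/epsilon shown in the card are the optimizer defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=128,  # assumption: per-device, single GPU
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,                 # training_steps: 10000
    evaluation_strategy="steps",      # the table evaluates every 200 steps
    eval_steps=200,
    logging_steps=200,
)
```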
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:28:26+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_EMP\_H3K36me3-seqsight\_16384\_512\_22M-L1\_f ================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset. It achieves the following results on the evaluation set: * Loss: 0.5035 * F1 Score: 0.7724 * Accuracy: 0.7761 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# 0.0_4iters_bs256_nodpo_only4w_iter_1

This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
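The tags mark this as a TRL DPO-style run. A hedged sketch of how the listed hyperparameters could drive TRL's classic `DPOTrainer` API follows; the pairwise dataset file, `beta` value, and `ref_model=None` handling are all assumptions, since the card gives no details about the "updated" and "original" datasets.

```python
# Hedged sketch of a DPO-style run with the card's hyperparameters, using
# TRL's classic DPOTrainer API. The dataset path and beta are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "HuggingFaceH4/mistral-7b-sft-beta"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Assumption: a JSONL file with prompt/chosen/rejected columns.
train_dataset = load_dataset("json", data_files="pairs.jsonl", split="train")

args = TrainingArguments(
    output_dir="out",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,  # 8 per device * 4 accum * 8 GPUs = 256
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)

trainer = DPOTrainer(
    model,
    ref_model=None,      # TRL derives a frozen reference copy when None
    args=args,
    beta=0.1,            # assumption: the card does not state beta
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```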
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.0_4iters_bs256_nodpo_only4w_iter_1", "results": []}]}
ShenaoZhang/0.0_4iters_bs256_nodpo_only4w_iter_1
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T08:29:03+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.0_4iters_bs256_nodpo_only4w_iter_1 This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
[ "# 0.0_4iters_bs256_nodpo_only4w_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.0_4iters_bs256_nodpo_only4w_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_EMP_H3K36me3-seqsight_16384_512_22M-L8_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4898
- F1 Score: 0.7785
- Accuracy: 0.7827

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5757 | 0.92 | 200 | 0.5494 | 0.7255 | 0.7296 |
| 0.5261 | 1.83 | 400 | 0.5399 | 0.7406 | 0.7448 |
| 0.5117 | 2.75 | 600 | 0.5298 | 0.7457 | 0.7494 |
| 0.5114 | 3.67 | 800 | 0.5242 | 0.7502 | 0.7540 |
| 0.4948 | 4.59 | 1000 | 0.5305 | 0.7490 | 0.7540 |
| 0.4926 | 5.5 | 1200 | 0.5200 | 0.7511 | 0.7566 |
| 0.4906 | 6.42 | 1400 | 0.5080 | 0.7641 | 0.7669 |
| 0.485 | 7.34 | 1600 | 0.5249 | 0.7484 | 0.7543 |
| 0.48 | 8.26 | 1800 | 0.5154 | 0.7576 | 0.7620 |
| 0.4809 | 9.17 | 2000 | 0.5099 | 0.7544 | 0.7589 |
| 0.477 | 10.09 | 2200 | 0.5382 | 0.7333 | 0.7425 |
| 0.4743 | 11.01 | 2400 | 0.5056 | 0.7562 | 0.7609 |
| 0.4714 | 11.93 | 2600 | 0.4906 | 0.7706 | 0.7729 |
| 0.4717 | 12.84 | 2800 | 0.5049 | 0.7542 | 0.7595 |
| 0.4676 | 13.76 | 3000 | 0.5215 | 0.7487 | 0.7557 |
| 0.4633 | 14.68 | 3200 | 0.4999 | 0.7685 | 0.7715 |
| 0.4682 | 15.6 | 3400 | 0.4994 | 0.7591 | 0.7635 |
| 0.464 | 16.51 | 3600 | 0.5080 | 0.7593 | 0.7643 |
| 0.4643 | 17.43 | 3800 | 0.4974 | 0.7643 | 0.7683 |
| 0.4601 | 18.35 | 4000 | 0.4985 | 0.7612 | 0.7658 |
| 0.4616 | 19.27 | 4200 | 0.5079 | 0.7613 | 0.7661 |
| 0.4586 | 20.18 | 4400 | 0.5152 | 0.7524 | 0.7592 |
| 0.4554 | 21.1 | 4600 | 0.5067 | 0.7648 | 0.7695 |
| 0.4567 | 22.02 | 4800 | 0.5062 | 0.7686 | 0.7726 |
| 0.4568 | 22.94 | 5000 | 0.4975 | 0.7656 | 0.7701 |
| 0.4523 | 23.85 | 5200 | 0.5032 | 0.7600 | 0.7655 |
| 0.4489 | 24.77 | 5400 | 0.5222 | 0.7484 | 0.7560 |
| 0.4504 | 25.69 | 5600 | 0.5140 | 0.7604 | 0.7663 |
| 0.4515 | 26.61 | 5800 | 0.4921 | 0.7722 | 0.7758 |
| 0.4466 | 27.52 | 6000 | 0.5274 | 0.7522 | 0.7592 |
| 0.4511 | 28.44 | 6200 | 0.5068 | 0.7582 | 0.7640 |
| 0.447 | 29.36 | 6400 | 0.5077 | 0.7577 | 0.7638 |
| 0.4452 | 30.28 | 6600 | 0.5119 | 0.7555 | 0.7620 |
| 0.4479 | 31.19 | 6800 | 0.4942 | 0.7733 | 0.7772 |
| 0.4481 | 32.11 | 7000 | 0.5034 | 0.7645 | 0.7698 |
| 0.4456 | 33.03 | 7200 | 0.5054 | 0.7631 | 0.7683 |
| 0.4438 | 33.94 | 7400 | 0.5077 | 0.7647 | 0.7698 |
| 0.4428 | 34.86 | 7600 | 0.5039 | 0.7670 | 0.7718 |
| 0.4426 | 35.78 | 7800 | 0.4994 | 0.7691 | 0.7732 |
| 0.4448 | 36.7 | 8000 | 0.5012 | 0.7609 | 0.7663 |
| 0.4424 | 37.61 | 8200 | 0.5070 | 0.7553 | 0.7618 |
| 0.4449 | 38.53 | 8400 | 0.5003 | 0.7651 | 0.7701 |
| 0.4377 | 39.45 | 8600 | 0.5074 | 0.7601 | 0.7658 |
| 0.4355 | 40.37 | 8800 | 0.5137 | 0.7578 | 0.7640 |
| 0.4417 | 41.28 | 9000 | 0.5040 | 0.7617 | 0.7672 |
| 0.4369 | 42.2 | 9200 | 0.4990 | 0.7678 | 0.7724 |
| 0.4358 | 43.12 | 9400 | 0.5010 | 0.7676 | 0.7721 |
| 0.4413 | 44.04 | 9600 | 0.5052 | 0.7596 | 0.7655 |
| 0.4374 | 44.95 | 9800 | 0.5068 | 0.7586 | 0.7646 |
| 0.4383 | 45.87 | 10000 | 0.5048 | 0.7613 | 0.7669 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
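The F1 and accuracy columns in tables like the one above can be produced with a `compute_metrics` hook along these lines. This is a sketch; the F1 averaging mode is an assumption, since the card does not state whether it is binary, macro, or weighted.

```python
# Sketch of a compute_metrics hook that yields the two columns reported
# above. The "macro" averaging mode is an assumption.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }

# Passed to the Trainer as: Trainer(..., compute_metrics=compute_metrics)
```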
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:29:32+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_EMP\_H3K36me3-seqsight\_16384\_512\_22M-L8\_f ================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset. It achieves the following results on the evaluation set: * Loss: 0.4898 * F1 Score: 0.7785 * Accuracy: 0.7827 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
null
# 🔎Taiwan-inquiry_7B_v2.1.gguf

- Model creator: [Joseph (Chen-Wei) Li](https://www.linkedin.com/in/joseph-li-3a453b231/)
- Original model: [Taiwan-inquiry_7B_v2.1](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1)

| Name | Quant method | Bits | Size | Use case |
| ---- | :----: | :----: | :----: | ----- |
| [Taiwan-inquiry_7B_v2.1-Q4_K_M.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1-Q4_K_M.gguf) | Q4_K_M | 4 | 4.54 GB | medium, balanced quality - recommended |
| [Taiwan-inquiry_7B_v2.1-Q5_K_M.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1-Q5_K_M.gguf) | Q5_K_M | 5 | 5.32 GB | large, very low quality loss - recommended |
| [Taiwan-inquiry_7B_v2.1-Q6_K.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1-Q6_K.gguf) | Q6_K | 6 | 6.14 GB | very large, extremely low quality loss |
| [Taiwan-inquiry_7B_v2.1-Q8_0.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1-Q8_0.gguf) | Q8_0 | 8 | 7.96 GB | very large, extremely low quality loss - not recommended |
| [Taiwan-inquiry_7B_v2.1.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1.gguf) | No quantization | 16 or 32 | 15 GB | very large, no quality loss - not recommended |

## Usage of the model

- The user can take on the role of a doctor, and the model will converse with them as if it were a patient.
- You can give the model a brief patient background in the system prompt, and it will respond based on that prompt **(using my patient generator: [**colab**](https://colab.research.google.com/drive/17MSob_tQ2hPtMBL0xOF2zzV6WWe4dEG6?usp=sharing))**.
- You can also ask directly about the symptoms of a given disease and possible therapies. **(Warning: this is not medical advice!)**

## Reference

- [llama.cpp](https://github.com/ggerganov/llama.cpp)
- [LM Studio](https://lmstudio.ai/)
- [Converting a Hugging Face model to GGUF and quantizing it with llama.cpp, using INX-TEXT/Bailong-instruct-7B as an example](https://medium.com/@NeroHin/%E5%B0%87-huggingface-%E6%A0%BC%E5%BC%8F%E6%A8%A1%E5%BC%8F%E8%BD%89%E6%8F%9B%E7%82%BA-gguf-%E4%BB%A5inx-text-bailong-instruct-7b-%E7%82%BA%E4%BE%8B-a2cfdd892cbc)
- [[LM Studio] The best interface for running language models: no special setup needed, easy management of multiple models, and a quick OpenAI-compatible server](https://the-walking-fish.com/p/lmstudio/)
- [[Day 15] - Steel Alpaca 🦙 LLM chatbot 🤖 (6/10): quantizing LLaMa with GGML](https://ithelp.ithome.com.tw/articles/10331431)
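Complementing the llama.cpp and LM Studio pointers above, here is a hedged local-inference sketch using `llama-cpp-python`. The file name matches the Q4_K_M quant in the table, but the system prompt, `n_ctx`, and reliance on the GGUF's embedded chat template are assumptions.

```python
# Hedged doctor-patient chat sketch with llama-cpp-python; the patient
# background and context size are placeholders, not documented values.
from llama_cpp import Llama

llm = Llama(model_path="Taiwan-inquiry_7B_v2.1-Q4_K_M.gguf", n_ctx=4096)

messages = [
    {"role": "system", "content": "You are a patient: 45-year-old male, "
                                  "three days of fever and productive cough."},
    {"role": "user", "content": "Hello, I'm your doctor today. What brings you in?"},
]
out = llm.create_chat_completion(messages=messages, max_tokens=256)
print(out["choices"][0]["message"]["content"])
```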
{"license": "apache-2.0"}
ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2024-04-27T08:32:40+00:00
[]
[]
TAGS #gguf #license-apache-2.0 #region-us
Taiwan-inquiry\_7B\_v2.1.gguf ============================= * Model creator: Joseph (Chen-Wei) Li * Original model: Taiwan-inquiry\_7B\_v2.1 Usage of the model ------------------ * The user can take on the role of a doctor, and the model will converse with them as if it were a patient. * You can give the model a brief patient background in the system prompt, and it will respond based on that prompt (using my patient generator: colab). * You can also ask directly about the symptoms of a given disease and possible therapies. (Warning: this is not medical advice!) Reference --------- * URL * LM Studio * Converting a Hugging Face model to GGUF and quantizing it with URL, using INX-TEXT/Bailong-instruct-7B as an example * [LM Studio] The best interface for running language models: no special setup needed, easy management of multiple models, and a quick OpenAI-compatible server (URL) * [Day 15] - Steel Alpaca LLM chatbot (6/10): quantizing LLaMa with GGML (URL)
[]
[ "TAGS\n#gguf #license-apache-2.0 #region-us \n" ]
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
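The card's "How to Get Started" section is left unfilled; a generic, hedged starting point for this llama text-generation checkpoint follows (the repo id comes from this record; the sampling settings are placeholders, not documented values).

```python
# Generic hedged starting point for this text-generation checkpoint;
# max_new_tokens and do_sample are placeholders, not documented values.
from transformers import pipeline

generator = pipeline("text-generation", model="swj0419/hp_all_STEP0000020")
out = generator("Once upon a time", max_new_tokens=50, do_sample=True)
print(out[0]["generated_text"])
```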
{"library_name": "transformers", "tags": []}
swj0419/hp_all_STEP0000020
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T08:33:19+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_EMP_H3K36me3-seqsight_16384_512_22M-L32_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4789
- F1 Score: 0.7828
- Accuracy: 0.7844

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.565 | 0.92 | 200 | 0.5575 | 0.7277 | 0.7342 |
| 0.5147 | 1.83 | 400 | 0.5250 | 0.7498 | 0.7534 |
| 0.5006 | 2.75 | 600 | 0.5174 | 0.7544 | 0.7580 |
| 0.4986 | 3.67 | 800 | 0.5092 | 0.7533 | 0.7575 |
| 0.4812 | 4.59 | 1000 | 0.5158 | 0.7562 | 0.7609 |
| 0.4781 | 5.5 | 1200 | 0.5051 | 0.7531 | 0.7583 |
| 0.4748 | 6.42 | 1400 | 0.4950 | 0.7704 | 0.7729 |
| 0.4678 | 7.34 | 1600 | 0.5119 | 0.7500 | 0.7554 |
| 0.4617 | 8.26 | 1800 | 0.4962 | 0.7644 | 0.7681 |
| 0.4627 | 9.17 | 2000 | 0.4943 | 0.7655 | 0.7692 |
| 0.456 | 10.09 | 2200 | 0.5253 | 0.7424 | 0.7509 |
| 0.4507 | 11.01 | 2400 | 0.4910 | 0.7698 | 0.7735 |
| 0.4449 | 11.93 | 2600 | 0.4763 | 0.7774 | 0.7787 |
| 0.4447 | 12.84 | 2800 | 0.4982 | 0.7610 | 0.7663 |
| 0.4366 | 13.76 | 3000 | 0.5181 | 0.7558 | 0.7626 |
| 0.4329 | 14.68 | 3200 | 0.4990 | 0.7720 | 0.7752 |
| 0.433 | 15.6 | 3400 | 0.4969 | 0.7643 | 0.7686 |
| 0.4288 | 16.51 | 3600 | 0.5208 | 0.7608 | 0.7666 |
| 0.4239 | 17.43 | 3800 | 0.5126 | 0.7642 | 0.7695 |
| 0.4205 | 18.35 | 4000 | 0.5006 | 0.7692 | 0.7732 |
| 0.4188 | 19.27 | 4200 | 0.5348 | 0.7654 | 0.7709 |
| 0.4152 | 20.18 | 4400 | 0.5283 | 0.7621 | 0.7675 |
| 0.4114 | 21.1 | 4600 | 0.5156 | 0.7666 | 0.7712 |
| 0.4103 | 22.02 | 4800 | 0.5106 | 0.7720 | 0.7755 |
| 0.4068 | 22.94 | 5000 | 0.5024 | 0.7749 | 0.7784 |
| 0.4019 | 23.85 | 5200 | 0.5140 | 0.7636 | 0.7689 |
| 0.3948 | 24.77 | 5400 | 0.5513 | 0.7598 | 0.7666 |
| 0.3955 | 25.69 | 5600 | 0.5268 | 0.7608 | 0.7666 |
| 0.394 | 26.61 | 5800 | 0.5128 | 0.7685 | 0.7721 |
| 0.3867 | 27.52 | 6000 | 0.5573 | 0.7582 | 0.7643 |
| 0.3908 | 28.44 | 6200 | 0.5237 | 0.7604 | 0.7658 |
| 0.3881 | 29.36 | 6400 | 0.5347 | 0.7571 | 0.7629 |
| 0.3834 | 30.28 | 6600 | 0.5306 | 0.7556 | 0.7618 |
| 0.3844 | 31.19 | 6800 | 0.5182 | 0.7665 | 0.7706 |
| 0.3808 | 32.11 | 7000 | 0.5227 | 0.7703 | 0.7741 |
| 0.3783 | 33.03 | 7200 | 0.5322 | 0.7657 | 0.7704 |
| 0.3767 | 33.94 | 7400 | 0.5268 | 0.7620 | 0.7672 |
| 0.3732 | 34.86 | 7600 | 0.5305 | 0.7647 | 0.7692 |
| 0.3734 | 35.78 | 7800 | 0.5269 | 0.7666 | 0.7704 |
| 0.3742 | 36.7 | 8000 | 0.5305 | 0.7595 | 0.7652 |
| 0.3717 | 37.61 | 8200 | 0.5435 | 0.7524 | 0.7592 |
| 0.3742 | 38.53 | 8400 | 0.5373 | 0.7608 | 0.7663 |
| 0.3657 | 39.45 | 8600 | 0.5339 | 0.7605 | 0.7655 |
| 0.3644 | 40.37 | 8800 | 0.5444 | 0.7581 | 0.7638 |
| 0.3668 | 41.28 | 9000 | 0.5327 | 0.7600 | 0.7652 |
| 0.3607 | 42.2 | 9200 | 0.5356 | 0.7619 | 0.7669 |
| 0.3606 | 43.12 | 9400 | 0.5380 | 0.7666 | 0.7709 |
| 0.366 | 44.04 | 9600 | 0.5392 | 0.7585 | 0.7643 |
| 0.3617 | 44.95 | 9800 | 0.5429 | 0.7571 | 0.7629 |
| 0.3628 | 45.87 | 10000 | 0.5405 | 0.7598 | 0.7652 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
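Note that in the table above the training loss keeps falling (0.565 → 0.3628) while the validation loss bottoms out near step 2600 (0.4763) and then drifts upward, a classic overfitting signature. A hedged sketch of how a run like this could retain the best checkpoint follows; the card does not say this was done.

```python
# Hedged sketch: keeping the best checkpoint when validation loss diverges
# from training loss, as in the table above. Not part of the original recipe.
from transformers import TrainingArguments, EarlyStoppingCallback, Trainer

args = TrainingArguments(
    output_dir="out",
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,
    save_strategy="steps",
    save_steps=200,
    load_best_model_at_end=True,      # restore the lowest-eval-loss weights
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
# trainer = Trainer(model=..., args=args, train_dataset=..., eval_dataset=...,
#                   callbacks=[EarlyStoppingCallback(early_stopping_patience=10)])
```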
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:34:15+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_EMP\_H3K36me3-seqsight\_16384\_512\_22M-L32\_f =================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset. It achieves the following results on the evaluation set: * Loss: 0.4789 * F1 Score: 0.7828 * Accuracy: 0.7844 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# radiopaedia_cl-llama3_8b-240426

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
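The card records the optimizer and scheduler but not the adapter configuration. A hedged sketch of a typical LoRA setup for this base model follows; `r`, `lora_alpha`, `lora_dropout`, and `target_modules` are assumptions, not values from the card.

```python
# Hedged LoRA setup sketch for Meta-Llama-3-8B; the adapter hyperparameters
# below are assumptions -- the card does not record them.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
lora = LoraConfig(
    r=16,                    # assumption
    lora_alpha=32,           # assumption
    lora_dropout=0.05,       # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
```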
{"license": "other", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "radiopaedia_cl-llama3_8b-240426", "results": []}]}
Seoulsky/radiopaedia_cl-llama3_8b-240426
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B", "license:other", "region:us" ]
null
2024-04-27T08:34:36+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B #license-other #region-us
# radiopaedia_cl-llama3_8b-240426 This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# radiopaedia_cl-llama3_8b-240426\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 10\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B #license-other #region-us \n", "# radiopaedia_cl-llama3_8b-240426\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 10\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
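The template above is unfilled; judging by the repo id in this record (`HC-85/distilbert-lora-arxiv-multilabel`), here is a hedged multi-label inference sketch. The base checkpoint, `num_labels`, and the 0.5 decision threshold are all assumptions.

```python
# Hedged multi-label inference sketch; base checkpoint, num_labels=8, and
# the 0.5 threshold are assumptions -- the card leaves these fields empty.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "distilbert-base-uncased"  # assumption: base checkpoint not stated
adapter_id = "HC-85/distilbert-lora-arxiv-multilabel"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=8, problem_type="multi_label_classification"
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("We study transformer scaling laws.", return_tensors="pt")
probs = torch.sigmoid(model(**inputs).logits)  # independent per-label scores
print((probs > 0.5).int())
```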
{"library_name": "transformers", "tags": []}
HC-85/distilbert-lora-arxiv-multilabel
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T08:34:56+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/TitleOS/EinsteinBagel-8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/EinsteinBagel-8B-GGUF/resolve/main/EinsteinBagel-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/EinsteinBagel-8B-GGUF/resolve/main/EinsteinBagel-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/EinsteinBagel-8B-GGUF/resolve/main/EinsteinBagel-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/EinsteinBagel-8B-GGUF/resolve/main/EinsteinBagel-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/EinsteinBagel-8B-GGUF/resolve/main/EinsteinBagel-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/EinsteinBagel-8B-GGUF/resolve/main/EinsteinBagel-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/EinsteinBagel-8B-GGUF/resolve/main/EinsteinBagel-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/EinsteinBagel-8B-GGUF/resolve/main/EinsteinBagel-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/EinsteinBagel-8B-GGUF/resolve/main/EinsteinBagel-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/EinsteinBagel-8B-GGUF/resolve/main/EinsteinBagel-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/EinsteinBagel-8B-GGUF/resolve/main/EinsteinBagel-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/EinsteinBagel-8B-GGUF/resolve/main/EinsteinBagel-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/EinsteinBagel-8B-GGUF/resolve/main/EinsteinBagel-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/EinsteinBagel-8B-GGUF/resolve/main/EinsteinBagel-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/EinsteinBagel-8B-GGUF/resolve/main/EinsteinBagel-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
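The usage pointer above assumes familiarity with GGUF tooling; as a concrete supplement (not part of the original card), here is a minimal sketch of fetching one static quant and running it with llama-cpp-python. The quant filename comes from the table above; the context size and prompt are assumptions.

```python
# Hedged sketch: download the Q4_K_M quant ("fast, recommended" in the table)
# and run it locally with llama-cpp-python. n_ctx and the prompt are assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/EinsteinBagel-8B-GGUF",
    filename="EinsteinBagel-8B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Briefly explain what a GGUF quant is.", max_tokens=64)
print(out["choices"][0]["text"])
```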
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "TitleOS/EinsteinBagel-8B", "quantized_by": "mradermacher"}
mradermacher/EinsteinBagel-8B-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:TitleOS/EinsteinBagel-8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-04-27T08:35:27+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mergekit #merge #en #base_model-TitleOS/EinsteinBagel-8B #license-llama3 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #mergekit #merge #en #base_model-TitleOS/EinsteinBagel-8B #license-llama3 #endpoints_compatible #region-us \n" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_total_Instruction0_SOPAL_v1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
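The card lists its hyperparameters but no training code; below is a hedged sketch restating them as `Seq2SeqTrainingArguments`. The `output_dir` and the `fp16` flag (standing in for "Native AMP") are assumptions, and the model/dataset wiring is omitted.

```python
# Hedged sketch of the listed hyperparameters; output_dir and fp16=True are
# assumptions (the card only says "mixed_precision_training: Native AMP").
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="CS505_COQE_viT5_total_Instruction0_SOPAL_v1",
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,
)
```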
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_total_Instruction0_SOPAL_v1", "results": []}]}
ThuyNT/CS505_COQE_viT5_total_Instruction0_SOPAL_v1
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T08:35:37+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# CS505_COQE_viT5_total_Instruction0_SOPAL_v1 This model is a fine-tuned version of VietAI/vit5-large on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# CS505_COQE_viT5_total_Instruction0_SOPAL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# CS505_COQE_viT5_total_Instruction0_SOPAL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# OrpoLlama3-8B

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64fc6d81d75293f417fee1d1/oa8hfBhbPfN6MPWVMJoLq.jpeg)

This is an ORPO fine-tune of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B), trained for 1.5k steps on [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k).

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Muhammad2003/OrpoLlama3-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

## 📈 Training curves

Wandb Report

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64fc6d81d75293f417fee1d1/eFL8QhHbSjY45Ai2JQFj9.png)

## 🏆 Evaluation

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64fc6d81d75293f417fee1d1/E5XZI4Hiaw3C3gThvoKrH.png)
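The card gives the step count but no ORPO hyperparameters, so the following is only a sketch of how such a run is typically wired with trl's `ORPOTrainer`; `beta`, `output_dir`, and the dataset split are assumptions, while `max_steps=1500` matches the card's 1.5k steps.

```python
# Hedged sketch of an ORPO run on the stated base model and dataset.
# beta and output_dir are assumptions; max_steps mirrors the card's 1.5k steps.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

config = ORPOConfig(output_dir="OrpoLlama3-8B", beta=0.1, max_steps=1500)
trainer = ORPOTrainer(model=model, args=config, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```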
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["orpo"], "datasets": ["mlabonne/orpo-dpo-mix-40k"], "base_model": ["meta-llama/Meta-Llama-3-8B"]}
Muhammad2003/OrpoLlama3-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "orpo", "conversational", "en", "dataset:mlabonne/orpo-dpo-mix-40k", "base_model:meta-llama/Meta-Llama-3-8B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T08:35:57+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #orpo #conversational #en #dataset-mlabonne/orpo-dpo-mix-40k #base_model-meta-llama/Meta-Llama-3-8B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# OrpoLlama3-8B

!image/jpeg

This is an ORPO fine-tune of meta-llama/Meta-Llama-3-8B, trained for 1.5k steps on mlabonne/orpo-dpo-mix-40k.

## Usage

## Training curves

Wandb Report

!image/png

## Evaluation

!image/png
[ "# OrpoLlama3-8B\n\n\n!image/jpeg\n\nThis is an ORPO fine-tune of meta-llama/Meta-Llama-3-8B on 1.5k steps of mlabonne/orpo-dpo-mix-40k.", "## Usage", "## Training curves\n\nWandb Report\n\n\n!image/png", "## Evaluation\n\n!image/png" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #orpo #conversational #en #dataset-mlabonne/orpo-dpo-mix-40k #base_model-meta-llama/Meta-Llama-3-8B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# OrpoLlama3-8B\n\n\n!image/jpeg\n\nThis is an ORPO fine-tune of meta-llama/Meta-Llama-3-8B on 1.5k steps of mlabonne/orpo-dpo-mix-40k.", "## Usage", "## Training curves\n\nWandb Report\n\n\n!image/png", "## Evaluation\n\n!image/png" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold3 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window16-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window16-256) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0008 - Accuracy: 0.6562 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.495 | 1.0 | 923 | 1.4345 | 0.5207 | | 1.3175 | 2.0 | 1846 | 1.2061 | 0.5953 | | 1.2659 | 3.0 | 2769 | 1.1111 | 0.6154 | | 1.0775 | 4.0 | 3692 | 1.0599 | 0.6343 | | 0.9198 | 5.0 | 4615 | 1.0400 | 0.6443 | | 0.9183 | 6.0 | 5538 | 1.0344 | 0.6440 | | 0.869 | 7.0 | 6461 | 1.0165 | 0.6546 | | 0.8871 | 8.0 | 7384 | 1.0264 | 0.6451 | | 0.9047 | 9.0 | 8307 | 0.9953 | 0.6567 | | 0.8868 | 10.0 | 9230 | 1.0008 | 0.6562 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0 - Datasets 2.14.6 - Tokenizers 0.14.1
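The card has no usage section; as a hedged supplement, here is a minimal inference sketch with the transformers pipeline. `"example.jpg"` is a placeholder path, not taken from the card.

```python
# Hedged inference sketch; "example.jpg" is a placeholder image path.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="onizukal/Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold3",
)
print(classifier("example.jpg"))  # top predicted labels with scores
```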
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swinv2-tiny-patch4-window16-256", "model-index": [{"name": "Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold3", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6562077360021639, "name": "Accuracy"}]}]}]}
onizukal/Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold3
null
[ "transformers", "safetensors", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swinv2-tiny-patch4-window16-256", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T08:36:12+00:00
[]
[]
TAGS #transformers #safetensors #swinv2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swinv2-tiny-patch4-window16-256 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
Boya1\_RMSProp\_1-e5\_10Epoch\_swinv2-tiny-patch4-window16-256\_fold3 ===================================================================== This model is a fine-tuned version of microsoft/swinv2-tiny-patch4-window16-256 on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 1.0008 * Accuracy: 0.6562 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.35.0 * Pytorch 2.1.0 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#transformers #safetensors #swinv2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swinv2-tiny-patch4-window16-256 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
text-generation
transformers
# llama-lexi-star-uncensored-8b-slerp llama-lexi-star-uncensored-8b-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1](https://huggingface.co/Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1) * [liminerity/llama-3-8b-silent-star](https://huggingface.co/liminerity/llama-3-8b-silent-star) ## 🧩 Configuration ```yaml slices: - sources: - model: Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 layer_range: [0, 32] - model: liminerity/llama-3-8b-silent-star layer_range: [0, 32] merge_method: slerp base_model: Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "liminerity/llama-lexi-star-uncensored-8b-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
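To make the YAML's `t` schedule concrete: mergekit interpolates each anchor list across layer depth, so `[0, 0.5, 0.3, 0.7, 1]` becomes a per-layer blend factor between the two models. The snippet below is a rough illustration of that mapping (my reading of the config, not mergekit's internal code).

```python
# Rough illustration (not mergekit's code) of how an anchor list maps to a
# per-layer slerp factor t across the 32 merged layers.
import numpy as np

anchors = [0, 0.5, 0.3, 0.7, 1]    # self_attn schedule from the YAML above
depth = np.linspace(0, 1, 32)      # normalized position of each layer
t = np.interp(depth, np.linspace(0, 1, len(anchors)), anchors)
print(t.round(2))  # t=0 keeps the base model's weights, t=1 the other model's
```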
{"tags": ["merge", "mergekit", "lazymergekit", "Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1", "liminerity/llama-3-8b-silent-star"], "base_model": ["Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1", "liminerity/llama-3-8b-silent-star"]}
liminerity/llama-lexi-star-uncensored-8b-slerp
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1", "liminerity/llama-3-8b-silent-star", "conversational", "base_model:Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1", "base_model:liminerity/llama-3-8b-silent-star", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T08:36:37+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 #liminerity/llama-3-8b-silent-star #conversational #base_model-Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 #base_model-liminerity/llama-3-8b-silent-star #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# llama-lexi-star-uncensored-8b-slerp llama-lexi-star-uncensored-8b-slerp is a merge of the following models using LazyMergekit: * Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 * liminerity/llama-3-8b-silent-star ## Configuration ## Usage
[ "# llama-lexi-star-uncensored-8b-slerp\n\nllama-lexi-star-uncensored-8b-slerp is a merge of the following models using LazyMergekit:\n* Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1\n* liminerity/llama-3-8b-silent-star", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 #liminerity/llama-3-8b-silent-star #conversational #base_model-Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 #base_model-liminerity/llama-3-8b-silent-star #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# llama-lexi-star-uncensored-8b-slerp\n\nllama-lexi-star-uncensored-8b-slerp is a merge of the following models using LazyMergekit:\n* Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1\n* liminerity/llama-3-8b-silent-star", "## Configuration", "## Usage" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Andro9669/flan-t5-ner
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T08:38:44+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
mjbuehler/Phi-3-mini-V200_NOINST_29040
null
[ "transformers", "safetensors", "phi3", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T08:39:23+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #phi3 #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #phi3 #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
swj0419/hp_all_STEP0000030
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T08:41:41+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Joy10/bert-fine-tuned-cola This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5055 - Validation Loss: 0.4208 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.5055 | 0.4208 | 0 | ### Framework versions - Transformers 4.40.0 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
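The optimizer dict above maps directly onto transformers' TF `AdamWeightDecay` class. Below is a hedged sketch of recreating it; `num_labels=2` is an assumption based on the CoLA task, and the compile step is not described in the card.

```python
# Hedged reconstruction of the listed optimizer settings; num_labels=2 and the
# bare compile() call are assumptions (transformers TF models supply a loss).
from transformers import AdamWeightDecay, TFAutoModelForSequenceClassification

optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)
model = TFAutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2
)
model.compile(optimizer=optimizer)
```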
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "bert-base-cased", "model-index": [{"name": "Joy10/bert-fine-tuned-cola", "results": []}]}
Joy10/bert-fine-tuned-cola
null
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T08:43:49+00:00
[]
[]
TAGS #transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Joy10/bert-fine-tuned-cola ========================== This model is a fine-tuned version of bert-base-cased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.5055 * Validation Loss: 0.4208 * Epoch: 0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': 2e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\_decay\_rate': 0.01} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.40.0 * TensorFlow 2.15.0 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 2e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 2e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_0-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.5635 - F1 Score: 0.7205 - Accuracy: 0.7210 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6564 | 3.92 | 200 | 0.6224 | 0.6385 | 0.6432 | | 0.6217 | 7.84 | 400 | 0.6094 | 0.6614 | 0.6642 | | 0.6073 | 11.76 | 600 | 0.5943 | 0.6889 | 0.6889 | | 0.5948 | 15.69 | 800 | 0.5874 | 0.6786 | 0.6790 | | 0.5833 | 19.61 | 1000 | 0.5781 | 0.6951 | 0.6951 | | 0.5724 | 23.53 | 1200 | 0.5694 | 0.6999 | 0.7 | | 0.5626 | 27.45 | 1400 | 0.5663 | 0.7110 | 0.7111 | | 0.5543 | 31.37 | 1600 | 0.5589 | 0.7146 | 0.7148 | | 0.5518 | 35.29 | 1800 | 0.5574 | 0.7096 | 0.7099 | | 0.5444 | 39.22 | 2000 | 0.5594 | 0.7062 | 0.7062 | | 0.5396 | 43.14 | 2200 | 0.5516 | 0.7146 | 0.7148 | | 0.5383 | 47.06 | 2400 | 0.5514 | 0.7175 | 0.7185 | | 0.529 | 50.98 | 2600 | 0.5583 | 0.7196 | 0.7198 | | 0.5288 | 54.9 | 2800 | 0.5527 | 0.7209 | 0.7222 | | 0.5224 | 58.82 | 3000 | 0.5505 | 0.7300 | 0.7309 | | 0.524 | 62.75 | 3200 | 0.5464 | 0.7146 | 0.7148 | | 0.517 | 66.67 | 3400 | 0.5473 | 0.7144 | 0.7148 | | 0.516 | 70.59 | 3600 | 0.5467 | 0.7099 | 0.7099 | | 0.5154 | 74.51 | 3800 | 0.5446 | 0.7219 | 0.7222 | | 0.5083 | 78.43 | 4000 | 0.5455 | 0.7254 | 0.7259 | | 0.5097 | 82.35 | 4200 | 0.5412 | 0.7212 | 0.7222 | | 0.5092 | 86.27 | 4400 | 0.5490 | 0.7148 | 0.7148 | | 0.5063 | 90.2 | 4600 | 0.5409 | 0.7312 | 0.7321 | | 0.5021 | 94.12 | 4800 | 0.5503 | 0.7247 | 0.7247 | | 0.499 | 98.04 | 5000 | 0.5381 | 0.7235 | 0.7247 | | 0.4951 | 101.96 | 5200 | 0.5424 | 0.7244 | 0.7247 | | 0.4959 | 105.88 | 5400 | 0.5378 | 0.7266 | 0.7272 | | 0.4952 | 109.8 | 5600 | 0.5411 | 0.7282 | 0.7284 | | 0.4934 | 113.73 | 5800 | 0.5398 | 0.7261 | 0.7272 | | 0.4878 | 117.65 | 6000 | 0.5439 | 0.7229 | 0.7235 | | 0.49 | 121.57 | 6200 | 0.5387 | 0.7274 | 0.7284 | | 0.4839 | 125.49 | 6400 | 0.5420 | 0.7257 | 0.7259 | | 0.4836 | 129.41 | 6600 | 0.5420 | 0.7307 | 0.7309 | | 0.4809 | 133.33 | 6800 | 0.5407 | 0.7264 | 0.7272 | | 0.482 | 137.25 | 7000 | 0.5404 | 0.7367 | 0.7370 | | 0.4776 | 141.18 | 7200 | 0.5408 | 0.7277 | 0.7284 | | 0.4834 | 145.1 | 7400 | 0.5383 | 0.7292 | 0.7296 | | 0.4763 | 149.02 | 7600 | 0.5390 | 0.7279 | 0.7284 | | 0.4797 | 152.94 | 7800 | 0.5366 | 0.7241 | 0.7247 | | 0.4728 | 156.86 | 8000 | 0.5408 | 0.7251 | 0.7259 | | 0.4733 | 160.78 | 8200 | 0.5429 | 0.7331 | 0.7333 | | 0.476 | 164.71 | 8400 | 0.5407 | 0.7305 | 0.7309 | | 0.4738 | 168.63 | 8600 | 0.5379 | 0.7290 | 0.7296 | | 0.474 | 172.55 | 8800 | 0.5381 | 0.7315 | 0.7321 | | 
0.4708 | 176.47 | 9000 | 0.5386 | 0.7304 | 0.7309 | | 0.4704 | 180.39 | 9200 | 0.5386 | 0.7254 | 0.7259 | | 0.4713 | 184.31 | 9400 | 0.5405 | 0.7357 | 0.7358 | | 0.4721 | 188.24 | 9600 | 0.5387 | 0.7319 | 0.7321 | | 0.4729 | 192.16 | 9800 | 0.5376 | 0.7317 | 0.7321 | | 0.4677 | 196.08 | 10000 | 0.5380 | 0.7305 | 0.7309 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_0-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_0-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:44:02+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_0-seqsight\_16384\_512\_22M-L1\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.5635 * F1 Score: 0.7205 * Accuracy: 0.7210 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_0-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.5997 - F1 Score: 0.7167 - Accuracy: 0.7173 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6394 | 3.92 | 200 | 0.6016 | 0.6585 | 0.6593 | | 0.5841 | 7.84 | 400 | 0.5833 | 0.6963 | 0.6963 | | 0.5603 | 11.76 | 600 | 0.5582 | 0.6959 | 0.6963 | | 0.5449 | 15.69 | 800 | 0.5542 | 0.7118 | 0.7136 | | 0.5321 | 19.61 | 1000 | 0.5486 | 0.7220 | 0.7222 | | 0.5229 | 23.53 | 1200 | 0.5510 | 0.7156 | 0.7185 | | 0.5101 | 27.45 | 1400 | 0.5456 | 0.7161 | 0.7173 | | 0.502 | 31.37 | 1600 | 0.5552 | 0.7308 | 0.7309 | | 0.5001 | 35.29 | 1800 | 0.5464 | 0.7279 | 0.7284 | | 0.4876 | 39.22 | 2000 | 0.5472 | 0.7268 | 0.7272 | | 0.4817 | 43.14 | 2200 | 0.5512 | 0.7293 | 0.7296 | | 0.4792 | 47.06 | 2400 | 0.5515 | 0.7200 | 0.7210 | | 0.4678 | 50.98 | 2600 | 0.5660 | 0.7355 | 0.7358 | | 0.4624 | 54.9 | 2800 | 0.5581 | 0.7278 | 0.7284 | | 0.4526 | 58.82 | 3000 | 0.5618 | 0.7355 | 0.7358 | | 0.4517 | 62.75 | 3200 | 0.5550 | 0.7357 | 0.7358 | | 0.4423 | 66.67 | 3400 | 0.5705 | 0.7354 | 0.7358 | | 0.4367 | 70.59 | 3600 | 0.5654 | 0.7308 | 0.7309 | | 0.4367 | 74.51 | 3800 | 0.5579 | 0.7374 | 0.7383 | | 0.4271 | 78.43 | 4000 | 0.5731 | 0.7345 | 0.7346 | | 0.424 | 82.35 | 4200 | 0.5756 | 0.7413 | 0.7432 | | 0.4235 | 86.27 | 4400 | 0.5632 | 0.7367 | 0.7370 | | 0.4196 | 90.2 | 4600 | 0.5668 | 0.7441 | 0.7444 | | 0.4111 | 94.12 | 4800 | 0.5751 | 0.7407 | 0.7407 | | 0.4096 | 98.04 | 5000 | 0.5687 | 0.7287 | 0.7296 | | 0.4011 | 101.96 | 5200 | 0.5765 | 0.7432 | 0.7432 | | 0.3974 | 105.88 | 5400 | 0.5788 | 0.7469 | 0.7469 | | 0.3932 | 109.8 | 5600 | 0.5931 | 0.7370 | 0.7370 | | 0.3941 | 113.73 | 5800 | 0.5861 | 0.7382 | 0.7383 | | 0.3867 | 117.65 | 6000 | 0.5959 | 0.7427 | 0.7432 | | 0.3884 | 121.57 | 6200 | 0.5960 | 0.7284 | 0.7284 | | 0.3814 | 125.49 | 6400 | 0.5965 | 0.7308 | 0.7309 | | 0.378 | 129.41 | 6600 | 0.6032 | 0.7371 | 0.7370 | | 0.3754 | 133.33 | 6800 | 0.5998 | 0.7382 | 0.7383 | | 0.3738 | 137.25 | 7000 | 0.6128 | 0.7404 | 0.7407 | | 0.3691 | 141.18 | 7200 | 0.6121 | 0.7407 | 0.7407 | | 0.3685 | 145.1 | 7400 | 0.6061 | 0.7391 | 0.7395 | | 0.3679 | 149.02 | 7600 | 0.6080 | 0.7408 | 0.7407 | | 0.3621 | 152.94 | 7800 | 0.6186 | 0.7327 | 0.7333 | | 0.3606 | 156.86 | 8000 | 0.6166 | 0.7354 | 0.7358 | | 0.3614 | 160.78 | 8200 | 0.6140 | 0.7341 | 0.7346 | | 0.364 | 164.71 | 8400 | 0.6149 | 0.7407 | 0.7407 | | 0.3583 | 168.63 | 8600 | 0.6186 | 0.7382 | 0.7383 | | 0.3594 | 172.55 | 8800 | 0.6170 | 0.7418 | 
0.7420 | | 0.3545 | 176.47 | 9000 | 0.6191 | 0.7367 | 0.7370 | | 0.3482 | 180.39 | 9200 | 0.6245 | 0.7403 | 0.7407 | | 0.3565 | 184.31 | 9400 | 0.6209 | 0.7382 | 0.7383 | | 0.3529 | 188.24 | 9600 | 0.6239 | 0.7407 | 0.7407 | | 0.3537 | 192.16 | 9800 | 0.6203 | 0.7368 | 0.7370 | | 0.3489 | 196.08 | 10000 | 0.6220 | 0.7344 | 0.7346 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_0-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_0-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:45:29+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_0-seqsight\_16384\_512\_22M-L8\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.5997 * F1 Score: 0.7167 * Accuracy: 0.7173 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_0-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset. It achieves the following results on the evaluation set: - Loss: 1.0468 - F1 Score: 0.6972 - Accuracy: 0.6975 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6263 | 3.92 | 200 | 0.5880 | 0.6701 | 0.6704 | | 0.5607 | 7.84 | 400 | 0.5651 | 0.7075 | 0.7074 | | 0.536 | 11.76 | 600 | 0.5482 | 0.7171 | 0.7185 | | 0.5152 | 15.69 | 800 | 0.5594 | 0.7142 | 0.7185 | | 0.4947 | 19.61 | 1000 | 0.5645 | 0.7309 | 0.7309 | | 0.4717 | 23.53 | 1200 | 0.5769 | 0.7318 | 0.7321 | | 0.443 | 27.45 | 1400 | 0.5844 | 0.7420 | 0.7420 | | 0.4217 | 31.37 | 1600 | 0.6111 | 0.7307 | 0.7309 | | 0.4108 | 35.29 | 1800 | 0.6075 | 0.7333 | 0.7333 | | 0.3866 | 39.22 | 2000 | 0.6402 | 0.7198 | 0.7198 | | 0.3733 | 43.14 | 2200 | 0.6276 | 0.7303 | 0.7309 | | 0.359 | 47.06 | 2400 | 0.6436 | 0.7263 | 0.7272 | | 0.3406 | 50.98 | 2600 | 0.6952 | 0.7270 | 0.7272 | | 0.3281 | 54.9 | 2800 | 0.6855 | 0.7235 | 0.7247 | | 0.3074 | 58.82 | 3000 | 0.7191 | 0.7254 | 0.7259 | | 0.3002 | 62.75 | 3200 | 0.7193 | 0.7246 | 0.7247 | | 0.2824 | 66.67 | 3400 | 0.7518 | 0.7242 | 0.7247 | | 0.2723 | 70.59 | 3600 | 0.7467 | 0.7210 | 0.7210 | | 0.2654 | 74.51 | 3800 | 0.7536 | 0.7109 | 0.7123 | | 0.2514 | 78.43 | 4000 | 0.7737 | 0.7307 | 0.7309 | | 0.2442 | 82.35 | 4200 | 0.8082 | 0.7271 | 0.7272 | | 0.2345 | 86.27 | 4400 | 0.7781 | 0.7230 | 0.7235 | | 0.2288 | 90.2 | 4600 | 0.8172 | 0.7233 | 0.7235 | | 0.2184 | 94.12 | 4800 | 0.8705 | 0.7296 | 0.7296 | | 0.2106 | 98.04 | 5000 | 0.8488 | 0.7295 | 0.7296 | | 0.2046 | 101.96 | 5200 | 0.8438 | 0.7193 | 0.7198 | | 0.198 | 105.88 | 5400 | 0.8727 | 0.7321 | 0.7321 | | 0.1937 | 109.8 | 5600 | 0.9359 | 0.7223 | 0.7222 | | 0.1874 | 113.73 | 5800 | 0.9022 | 0.7270 | 0.7272 | | 0.1748 | 117.65 | 6000 | 0.9720 | 0.7275 | 0.7284 | | 0.1851 | 121.57 | 6200 | 0.9062 | 0.7283 | 0.7284 | | 0.1712 | 125.49 | 6400 | 0.9613 | 0.7290 | 0.7296 | | 0.1672 | 129.41 | 6600 | 0.9403 | 0.7346 | 0.7346 | | 0.1641 | 133.33 | 6800 | 0.9332 | 0.7258 | 0.7259 | | 0.1602 | 137.25 | 7000 | 0.9433 | 0.7355 | 0.7358 | | 0.1522 | 141.18 | 7200 | 1.0043 | 0.7342 | 0.7346 | | 0.1536 | 145.1 | 7400 | 0.9667 | 0.7306 | 0.7309 | | 0.1514 | 149.02 | 7600 | 0.9953 | 0.7321 | 0.7321 | | 0.1477 | 152.94 | 7800 | 1.0024 | 0.7176 | 0.7185 | | 0.1456 | 156.86 | 8000 | 1.0120 | 0.7330 | 0.7333 | | 0.141 | 160.78 | 8200 | 1.0228 | 0.7431 | 0.7432 | | 0.1441 | 164.71 | 8400 | 1.0232 | 0.7395 | 0.7395 | | 0.1394 | 168.63 | 8600 | 1.0253 | 0.7367 | 0.7370 | | 0.138 | 172.55 | 8800 | 1.0213 | 0.7308 | 
0.7309 | | 0.1402 | 176.47 | 9000 | 1.0117 | 0.7269 | 0.7272 | | 0.1335 | 180.39 | 9200 | 1.0230 | 0.7240 | 0.7247 | | 0.1363 | 184.31 | 9400 | 1.0253 | 0.7342 | 0.7346 | | 0.1356 | 188.24 | 9600 | 1.0213 | 0.7369 | 0.7370 | | 0.1351 | 192.16 | 9800 | 1.0196 | 0.7305 | 0.7309 | | 0.1334 | 196.08 | 10000 | 1.0227 | 0.7317 | 0.7321 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_0-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_0-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:46:52+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_0-seqsight\_16384\_512\_22M-L32\_f ============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset. It achieves the following results on the evaluation set: * Loss: 1.0468 * F1 Score: 0.6972 * Accuracy: 0.6975 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.2586 - F1 Score: 0.8835 - Accuracy: 0.8836 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5207 | 0.47 | 200 | 0.4244 | 0.7974 | 0.7985 | | 0.4191 | 0.95 | 400 | 0.3640 | 0.8336 | 0.8338 | | 0.3748 | 1.42 | 600 | 0.3333 | 0.8479 | 0.8480 | | 0.3603 | 1.9 | 800 | 0.3115 | 0.8617 | 0.8617 | | 0.3365 | 2.37 | 1000 | 0.3054 | 0.8665 | 0.8666 | | 0.3339 | 2.84 | 1200 | 0.2994 | 0.8685 | 0.8685 | | 0.3174 | 3.32 | 1400 | 0.2950 | 0.8698 | 0.8698 | | 0.326 | 3.79 | 1600 | 0.2910 | 0.8710 | 0.8710 | | 0.322 | 4.27 | 1800 | 0.2886 | 0.8710 | 0.8712 | | 0.3142 | 4.74 | 2000 | 0.2929 | 0.8715 | 0.8715 | | 0.3177 | 5.21 | 2200 | 0.2829 | 0.8736 | 0.8737 | | 0.3112 | 5.69 | 2400 | 0.2853 | 0.8752 | 0.8752 | | 0.31 | 6.16 | 2600 | 0.2785 | 0.8782 | 0.8783 | | 0.3046 | 6.64 | 2800 | 0.2764 | 0.8777 | 0.8778 | | 0.3052 | 7.11 | 3000 | 0.2813 | 0.8795 | 0.8795 | | 0.2977 | 7.58 | 3200 | 0.2804 | 0.8769 | 0.8769 | | 0.2955 | 8.06 | 3400 | 0.2682 | 0.8829 | 0.8830 | | 0.2918 | 8.53 | 3600 | 0.2724 | 0.8837 | 0.8838 | | 0.2966 | 9.0 | 3800 | 0.2643 | 0.8853 | 0.8854 | | 0.2891 | 9.48 | 4000 | 0.2637 | 0.8836 | 0.8838 | | 0.2887 | 9.95 | 4200 | 0.2616 | 0.8852 | 0.8852 | | 0.2876 | 10.43 | 4400 | 0.2599 | 0.8861 | 0.8861 | | 0.2861 | 10.9 | 4600 | 0.2610 | 0.8854 | 0.8854 | | 0.2805 | 11.37 | 4800 | 0.2586 | 0.8870 | 0.8870 | | 0.2885 | 11.85 | 5000 | 0.2579 | 0.8871 | 0.8872 | | 0.279 | 12.32 | 5200 | 0.2650 | 0.8818 | 0.8818 | | 0.2872 | 12.8 | 5400 | 0.2591 | 0.8854 | 0.8854 | | 0.2859 | 13.27 | 5600 | 0.2552 | 0.8900 | 0.8900 | | 0.2788 | 13.74 | 5800 | 0.2563 | 0.8886 | 0.8887 | | 0.2797 | 14.22 | 6000 | 0.2555 | 0.8901 | 0.8901 | | 0.2761 | 14.69 | 6200 | 0.2562 | 0.8900 | 0.8900 | | 0.2843 | 15.17 | 6400 | 0.2537 | 0.8911 | 0.8912 | | 0.281 | 15.64 | 6600 | 0.2543 | 0.8901 | 0.8901 | | 0.2766 | 16.11 | 6800 | 0.2530 | 0.8906 | 0.8906 | | 0.2729 | 16.59 | 7000 | 0.2518 | 0.8928 | 0.8928 | | 0.2761 | 17.06 | 7200 | 0.2532 | 0.8913 | 0.8913 | | 0.2746 | 17.54 | 7400 | 0.2535 | 0.8897 | 0.8897 | | 0.2766 | 18.01 | 7600 | 0.2522 | 0.8928 | 0.8928 | | 0.2704 | 18.48 | 7800 | 0.2537 | 0.8882 | 0.8882 | | 0.278 | 18.96 | 8000 | 0.2511 | 0.8913 | 0.8913 | | 0.2738 | 19.43 | 8200 | 0.2520 | 0.8912 | 0.8912 | | 0.2696 | 19.91 | 8400 | 0.2523 | 0.8926 | 0.8927 | | 0.276 | 20.38 | 8600 | 0.2510 | 0.8920 | 0.8921 | | 0.2732 | 20.85 | 8800 | 0.2513 | 0.8932 | 0.8933 | | 0.2723 | 21.33 | 9000 | 0.2506 | 
0.8920 | 0.8921 | | 0.272 | 21.8 | 9200 | 0.2518 | 0.8913 | 0.8913 | | 0.2753 | 22.27 | 9400 | 0.2511 | 0.8906 | 0.8906 | | 0.27 | 22.75 | 9600 | 0.2511 | 0.8931 | 0.8931 | | 0.2741 | 23.22 | 9800 | 0.2515 | 0.8913 | 0.8913 | | 0.2747 | 23.7 | 10000 | 0.2511 | 0.8917 | 0.8918 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_1-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:46:52+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_1-seqsight\_16384\_512\_22M-L1\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.2586 * F1 Score: 0.8835 * Accuracy: 0.8836 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
image-text-to-text
xtuner
# nullt3r/llava-llama-3-8b-v1_1-Q8_0-GGUF This model was converted to GGUF format from [`xtuner/llava-llama-3-8b-v1_1`](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo nullt3r/llava-llama-3-8b-v1_1-Q8_0-GGUF --model llava-llama-3-8b-v1_1.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo nullt3r/llava-llama-3-8b-v1_1-Q8_0-GGUF --model llava-llama-3-8b-v1_1.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llava-llama-3-8b-v1_1.Q8_0.gguf -n 128 ```
{"library_name": "xtuner", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["Lin-Chen/ShareGPT4V"], "pipeline_tag": "image-text-to-text"}
nullt3r/llava-llama-3-8b-v1_1-Q8_0-GGUF
null
[ "xtuner", "gguf", "llama-cpp", "gguf-my-repo", "image-text-to-text", "dataset:Lin-Chen/ShareGPT4V", "region:us" ]
null
2024-04-27T08:47:03+00:00
[]
[]
TAGS #xtuner #gguf #llama-cpp #gguf-my-repo #image-text-to-text #dataset-Lin-Chen/ShareGPT4V #region-us
# nullt3r/llava-llama-3-8b-v1_1-Q8_0-GGUF This model was converted to GGUF format from 'xtuner/llava-llama-3-8b-v1_1' using URL via URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo.
[ "# nullt3r/llava-llama-3-8b-v1_1-Q8_0-GGUF\nThis model was converted to GGUF format from 'xtuner/llava-llama-3-8b-v1_1' using URL via URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo." ]
[ "TAGS\n#xtuner #gguf #llama-cpp #gguf-my-repo #image-text-to-text #dataset-Lin-Chen/ShareGPT4V #region-us \n", "# nullt3r/llava-llama-3-8b-v1_1-Q8_0-GGUF\nThis model was converted to GGUF format from 'xtuner/llava-llama-3-8b-v1_1' using URL via URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.2437 - F1 Score: 0.8923 - Accuracy: 0.8924 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4767 | 0.47 | 200 | 0.3561 | 0.8367 | 0.8369 | | 0.3515 | 0.95 | 400 | 0.3060 | 0.8652 | 0.8652 | | 0.3248 | 1.42 | 600 | 0.2997 | 0.8689 | 0.8689 | | 0.3262 | 1.9 | 800 | 0.2849 | 0.8739 | 0.8740 | | 0.3086 | 2.37 | 1000 | 0.2851 | 0.8756 | 0.8756 | | 0.3066 | 2.84 | 1200 | 0.2785 | 0.8781 | 0.8781 | | 0.2922 | 3.32 | 1400 | 0.2750 | 0.8809 | 0.8809 | | 0.3007 | 3.79 | 1600 | 0.2737 | 0.8796 | 0.8796 | | 0.2944 | 4.27 | 1800 | 0.2640 | 0.8852 | 0.8852 | | 0.2874 | 4.74 | 2000 | 0.2741 | 0.8803 | 0.8804 | | 0.2901 | 5.21 | 2200 | 0.2590 | 0.8854 | 0.8855 | | 0.2851 | 5.69 | 2400 | 0.2642 | 0.8823 | 0.8823 | | 0.2826 | 6.16 | 2600 | 0.2554 | 0.8893 | 0.8894 | | 0.2775 | 6.64 | 2800 | 0.2556 | 0.8875 | 0.8876 | | 0.2792 | 7.11 | 3000 | 0.2627 | 0.8861 | 0.8861 | | 0.2683 | 7.58 | 3200 | 0.2628 | 0.8854 | 0.8854 | | 0.2684 | 8.06 | 3400 | 0.2478 | 0.8922 | 0.8922 | | 0.2635 | 8.53 | 3600 | 0.2552 | 0.8900 | 0.8900 | | 0.2697 | 9.0 | 3800 | 0.2481 | 0.8928 | 0.8928 | | 0.2586 | 9.48 | 4000 | 0.2448 | 0.8924 | 0.8925 | | 0.2628 | 9.95 | 4200 | 0.2417 | 0.8947 | 0.8947 | | 0.2588 | 10.43 | 4400 | 0.2436 | 0.8946 | 0.8946 | | 0.2597 | 10.9 | 4600 | 0.2454 | 0.8928 | 0.8928 | | 0.2524 | 11.37 | 4800 | 0.2446 | 0.8930 | 0.8930 | | 0.2614 | 11.85 | 5000 | 0.2406 | 0.8948 | 0.8949 | | 0.253 | 12.32 | 5200 | 0.2517 | 0.8901 | 0.8901 | | 0.2563 | 12.8 | 5400 | 0.2459 | 0.8940 | 0.8940 | | 0.2594 | 13.27 | 5600 | 0.2411 | 0.8943 | 0.8943 | | 0.2503 | 13.74 | 5800 | 0.2395 | 0.8948 | 0.8949 | | 0.2517 | 14.22 | 6000 | 0.2416 | 0.8949 | 0.8949 | | 0.2498 | 14.69 | 6200 | 0.2394 | 0.8966 | 0.8967 | | 0.2543 | 15.17 | 6400 | 0.2400 | 0.8967 | 0.8967 | | 0.2535 | 15.64 | 6600 | 0.2396 | 0.8961 | 0.8961 | | 0.2471 | 16.11 | 6800 | 0.2388 | 0.8969 | 0.8970 | | 0.2464 | 16.59 | 7000 | 0.2375 | 0.8964 | 0.8964 | | 0.2471 | 17.06 | 7200 | 0.2391 | 0.8980 | 0.8980 | | 0.2462 | 17.54 | 7400 | 0.2388 | 0.8968 | 0.8968 | | 0.2464 | 18.01 | 7600 | 0.2365 | 0.8984 | 0.8984 | | 0.2426 | 18.48 | 7800 | 0.2423 | 0.8958 | 0.8958 | | 0.2494 | 18.96 | 8000 | 0.2364 | 0.8983 | 0.8983 | | 0.2439 | 19.43 | 8200 | 0.2368 | 0.8974 | 0.8974 | | 0.2412 | 19.91 | 8400 | 0.2376 | 0.8977 | 0.8977 | | 0.2476 | 20.38 | 8600 | 0.2362 | 0.8987 | 0.8987 | | 0.2432 | 20.85 | 8800 | 0.2373 | 0.8963 | 0.8964 | | 0.2417 | 21.33 | 9000 | 
0.2371 | 0.8983 | 0.8983 | | 0.2422 | 21.8 | 9200 | 0.2384 | 0.8984 | 0.8984 | | 0.2472 | 22.27 | 9400 | 0.2367 | 0.8976 | 0.8976 | | 0.2393 | 22.75 | 9600 | 0.2363 | 0.8977 | 0.8977 | | 0.2441 | 23.22 | 9800 | 0.2365 | 0.8983 | 0.8983 | | 0.2437 | 23.7 | 10000 | 0.2365 | 0.8986 | 0.8986 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_1-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:47:33+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_1-seqsight\_16384\_512\_22M-L8\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.2437 * F1 Score: 0.8923 * Accuracy: 0.8924 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_total_Instruction0_SOPAL_v1_h1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_total_Instruction0_SOPAL_v1_h1", "results": []}]}
ThuyNT/CS505_COQE_viT5_total_Instruction0_SOPAL_v1_h1
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T08:47:44+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# CS505_COQE_viT5_total_Instruction0_SOPAL_v1_h1 This model is a fine-tuned version of VietAI/vit5-large on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# CS505_COQE_viT5_total_Instruction0_SOPAL_v1_h1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 25\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# CS505_COQE_viT5_total_Instruction0_SOPAL_v1_h1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 25\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.2345 - F1 Score: 0.8972 - Accuracy: 0.8973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4498 | 0.47 | 200 | 0.3274 | 0.8542 | 0.8543 | | 0.3336 | 0.95 | 400 | 0.2971 | 0.8719 | 0.8719 | | 0.312 | 1.42 | 600 | 0.2770 | 0.8788 | 0.8789 | | 0.3121 | 1.9 | 800 | 0.2705 | 0.8816 | 0.8817 | | 0.2937 | 2.37 | 1000 | 0.2703 | 0.8827 | 0.8827 | | 0.2882 | 2.84 | 1200 | 0.2592 | 0.8882 | 0.8882 | | 0.2719 | 3.32 | 1400 | 0.2574 | 0.8910 | 0.8910 | | 0.2787 | 3.79 | 1600 | 0.2588 | 0.8884 | 0.8884 | | 0.271 | 4.27 | 1800 | 0.2445 | 0.8925 | 0.8927 | | 0.264 | 4.74 | 2000 | 0.2548 | 0.8910 | 0.8910 | | 0.264 | 5.21 | 2200 | 0.2403 | 0.8947 | 0.8949 | | 0.2588 | 5.69 | 2400 | 0.2511 | 0.8913 | 0.8913 | | 0.2558 | 6.16 | 2600 | 0.2398 | 0.8958 | 0.8959 | | 0.2536 | 6.64 | 2800 | 0.2412 | 0.8919 | 0.8921 | | 0.2537 | 7.11 | 3000 | 0.2462 | 0.8953 | 0.8953 | | 0.2422 | 7.58 | 3200 | 0.2513 | 0.8922 | 0.8922 | | 0.2462 | 8.06 | 3400 | 0.2359 | 0.8978 | 0.8979 | | 0.241 | 8.53 | 3600 | 0.2387 | 0.8993 | 0.8993 | | 0.2473 | 9.0 | 3800 | 0.2374 | 0.8972 | 0.8973 | | 0.239 | 9.48 | 4000 | 0.2336 | 0.8980 | 0.8981 | | 0.2427 | 9.95 | 4200 | 0.2301 | 0.9011 | 0.9011 | | 0.2367 | 10.43 | 4400 | 0.2347 | 0.8989 | 0.8989 | | 0.2387 | 10.9 | 4600 | 0.2353 | 0.8989 | 0.8989 | | 0.2315 | 11.37 | 4800 | 0.2392 | 0.8990 | 0.8990 | | 0.2412 | 11.85 | 5000 | 0.2326 | 0.9014 | 0.9014 | | 0.2322 | 12.32 | 5200 | 0.2400 | 0.8964 | 0.8964 | | 0.2343 | 12.8 | 5400 | 0.2410 | 0.8986 | 0.8986 | | 0.2376 | 13.27 | 5600 | 0.2354 | 0.8974 | 0.8974 | | 0.2298 | 13.74 | 5800 | 0.2317 | 0.8995 | 0.8996 | | 0.2307 | 14.22 | 6000 | 0.2321 | 0.9026 | 0.9026 | | 0.2265 | 14.69 | 6200 | 0.2314 | 0.9033 | 0.9033 | | 0.231 | 15.17 | 6400 | 0.2351 | 0.8974 | 0.8974 | | 0.2314 | 15.64 | 6600 | 0.2340 | 0.9002 | 0.9002 | | 0.2238 | 16.11 | 6800 | 0.2328 | 0.9011 | 0.9011 | | 0.2234 | 16.59 | 7000 | 0.2332 | 0.8992 | 0.8992 | | 0.2255 | 17.06 | 7200 | 0.2301 | 0.9001 | 0.9001 | | 0.2232 | 17.54 | 7400 | 0.2314 | 0.9018 | 0.9019 | | 0.2228 | 18.01 | 7600 | 0.2298 | 0.9027 | 0.9027 | | 0.2196 | 18.48 | 7800 | 0.2374 | 0.8990 | 0.8990 | | 0.2255 | 18.96 | 8000 | 0.2281 | 0.9035 | 0.9035 | | 0.2188 | 19.43 | 8200 | 0.2303 | 0.9016 | 0.9016 | | 0.2188 | 19.91 | 8400 | 0.2309 | 0.9020 | 0.9020 | | 0.223 | 20.38 | 8600 | 0.2293 | 0.9012 | 0.9013 | | 0.2183 | 20.85 | 8800 | 0.2316 | 0.9012 | 0.9013 | | 0.2194 | 21.33 | 9000 | 0.2299 | 
0.9017 | 0.9017 | | 0.2175 | 21.8 | 9200 | 0.2308 | 0.9024 | 0.9024 | | 0.2228 | 22.27 | 9400 | 0.2284 | 0.9027 | 0.9027 | | 0.214 | 22.75 | 9600 | 0.2297 | 0.9018 | 0.9019 | | 0.2182 | 23.22 | 9800 | 0.2298 | 0.9029 | 0.9029 | | 0.2183 | 23.7 | 10000 | 0.2300 | 0.9027 | 0.9027 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_1-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:48:55+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_1-seqsight\_16384\_512\_22M-L32\_f ============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.2345 * F1 Score: 0.8972 * Accuracy: 0.8973 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
swj0419/hp_all_STEP0000040
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T08:49:57+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_mouse_4-seqsight_16384_512_22M-L1_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6011
- F1 Score: 0.6739
- Accuracy: 0.6745

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6645        | 1.69  | 200   | 0.6442          | 0.6188   | 0.6251   |
| 0.6401        | 3.39  | 400   | 0.6242          | 0.6371   | 0.6389   |
| 0.6233        | 5.08  | 600   | 0.6153          | 0.6554   | 0.6553   |
| 0.6166        | 6.78  | 800   | 0.6090          | 0.6629   | 0.6628   |
| 0.6102        | 8.47  | 1000  | 0.6059          | 0.6633   | 0.6638   |
| 0.603         | 10.17 | 1200  | 0.6066          | 0.6606   | 0.6622   |
| 0.5969        | 11.86 | 1400  | 0.5961          | 0.6825   | 0.6824   |
| 0.5947        | 13.56 | 1600  | 0.5966          | 0.6744   | 0.6745   |
| 0.593         | 15.25 | 1800  | 0.5914          | 0.6809   | 0.6808   |
| 0.5878        | 16.95 | 2000  | 0.5894          | 0.6942   | 0.6941   |
| 0.5855        | 18.64 | 2200  | 0.5913          | 0.6795   | 0.6803   |
| 0.5855        | 20.34 | 2400  | 0.5864          | 0.6913   | 0.6914   |
| 0.5845        | 22.03 | 2600  | 0.5862          | 0.6898   | 0.6899   |
| 0.5797        | 23.73 | 2800  | 0.5839          | 0.6884   | 0.6883   |
| 0.578         | 25.42 | 3000  | 0.5847          | 0.6908   | 0.6909   |
| 0.5771        | 27.12 | 3200  | 0.6002          | 0.6552   | 0.6601   |
| 0.5788        | 28.81 | 3400  | 0.5861          | 0.6797   | 0.6803   |
| 0.5723        | 30.51 | 3600  | 0.5885          | 0.6805   | 0.6814   |
| 0.574         | 32.2  | 3800  | 0.5826          | 0.6889   | 0.6888   |
| 0.5729        | 33.9  | 4000  | 0.5884          | 0.6755   | 0.6766   |
| 0.5699        | 35.59 | 4200  | 0.5891          | 0.6749   | 0.6766   |
| 0.5733        | 37.29 | 4400  | 0.5829          | 0.6925   | 0.6925   |
| 0.5695        | 38.98 | 4600  | 0.5812          | 0.6936   | 0.6936   |
| 0.5646        | 40.68 | 4800  | 0.5845          | 0.6844   | 0.6851   |
| 0.5636        | 42.37 | 5000  | 0.5834          | 0.6871   | 0.6872   |
| 0.5662        | 44.07 | 5200  | 0.5818          | 0.6936   | 0.6936   |
| 0.5675        | 45.76 | 5400  | 0.5840          | 0.6862   | 0.6872   |
| 0.567         | 47.46 | 5600  | 0.5833          | 0.6920   | 0.6920   |
| 0.5701        | 49.15 | 5800  | 0.5860          | 0.6756   | 0.6776   |
| 0.5645        | 50.85 | 6000  | 0.5798          | 0.6869   | 0.6872   |
| 0.5639        | 52.54 | 6200  | 0.5845          | 0.6799   | 0.6808   |
| 0.5624        | 54.24 | 6400  | 0.5798          | 0.6941   | 0.6941   |
| 0.5656        | 55.93 | 6600  | 0.5785          | 0.6969   | 0.6968   |
| 0.5645        | 57.63 | 6800  | 0.5783          | 0.6952   | 0.6952   |
| 0.563         | 59.32 | 7000  | 0.5829          | 0.6842   | 0.6851   |
| 0.5612        | 61.02 | 7200  | 0.5776          | 0.6947   | 0.6946   |
| 0.5629        | 62.71 | 7400  | 0.5806          | 0.6933   | 0.6936   |
| 0.5614        | 64.41 | 7600  | 0.5800          | 0.6936   | 0.6936   |
| 0.5609        | 66.1  | 7800  | 0.5798          | 0.6950   | 0.6952   |
| 0.5601        | 67.8  | 8000  | 0.5780          | 0.6963   | 0.6962   |
| 0.5588        | 69.49 | 8200  | 0.5805          | 0.6924   | 0.6925   |
| 0.5574        | 71.19 | 8400  | 0.5796          | 0.6924   | 0.6925   |
| 0.5598        | 72.88 | 8600  | 0.5779          | 0.6969   | 0.6968   |
| 0.5559        | 74.58 | 8800  | 0.5798          | 0.6919   | 0.6920   |
| 0.5603        | 76.27 | 9000  | 0.5791          | 0.6925   | 0.6925   |
| 0.5579        | 77.97 | 9200  | 0.5790          | 0.6909   | 0.6909   |
| 0.556         | 79.66 | 9400  | 0.5787          | 0.6931   | 0.6930   |
| 0.5595        | 81.36 | 9600  | 0.5788          | 0.6920   | 0.6920   |
| 0.5581        | 83.05 | 9800  | 0.5787          | 0.6925   | 0.6925   |
| 0.5571        | 84.75 | 10000 | 0.5790          | 0.6925   | 0.6925   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
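The card's "How to Get Started" material is absent, so the following is only a sketch of how a PEFT adapter like this one is typically attached to its base model. The adapter and base ids come from this record; the binary sequence-classification head and the `trust_remote_code` flag are assumptions, not details the card confirms.

```python
# Hypothetical usage sketch (not taken from the model card itself).
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_22M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_4-seqsight_16384_512_22M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
# Assumption: the GUE task is binary classification and the base repo may
# ship a custom architecture, hence num_labels=2 and trust_remote_code=True.
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights
model.eval()
```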
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_4-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:50:29+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_4-seqsight\_16384\_512\_22M-L1\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset. It achieves the following results on the evaluation set: * Loss: 0.6011 * F1 Score: 0.6739 * Accuracy: 0.6745 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
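The hyperparameter list above maps directly onto `transformers.TrainingArguments`. The sketch below is an assumed reconstruction of that configuration (the `output_dir` name is illustrative), not the authors' actual training script.

```python
# Assumed reconstruction of the listed hyperparameters with TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_mouse_4-seqsight_16384_512_22M-L1_f",  # illustrative name
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=128,  # train_batch_size: 128
    per_device_eval_batch_size=128,   # eval_batch_size: 128
    seed=42,
    adam_beta1=0.9,                   # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # ... and epsilon=1e-08
    lr_scheduler_type="linear",
    max_steps=10_000,                 # training_steps: 10000
)
```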
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_mouse_4-seqsight_16384_512_22M-L8_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6171
- F1 Score: 0.6829
- Accuracy: 0.6830

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6535        | 1.69  | 200   | 0.6284          | 0.6305   | 0.6330   |
| 0.6223        | 3.39  | 400   | 0.6105          | 0.6579   | 0.6591   |
| 0.6023        | 5.08  | 600   | 0.5991          | 0.6740   | 0.6745   |
| 0.5923        | 6.78  | 800   | 0.5908          | 0.6829   | 0.6835   |
| 0.5875        | 8.47  | 1000  | 0.5855          | 0.6836   | 0.6835   |
| 0.5815        | 10.17 | 1200  | 0.5897          | 0.6752   | 0.6766   |
| 0.576         | 11.86 | 1400  | 0.5817          | 0.6915   | 0.6920   |
| 0.574         | 13.56 | 1600  | 0.5799          | 0.6921   | 0.6920   |
| 0.5694        | 15.25 | 1800  | 0.5817          | 0.7023   | 0.7037   |
| 0.566         | 16.95 | 2000  | 0.5768          | 0.6915   | 0.6920   |
| 0.5617        | 18.64 | 2200  | 0.5776          | 0.6923   | 0.6925   |
| 0.56          | 20.34 | 2400  | 0.5748          | 0.7059   | 0.7058   |
| 0.5595        | 22.03 | 2600  | 0.5940          | 0.6750   | 0.6792   |
| 0.5535        | 23.73 | 2800  | 0.5723          | 0.7017   | 0.7015   |
| 0.5511        | 25.42 | 3000  | 0.5741          | 0.7008   | 0.7010   |
| 0.5486        | 27.12 | 3200  | 0.5916          | 0.6752   | 0.6792   |
| 0.5489        | 28.81 | 3400  | 0.5721          | 0.7043   | 0.7042   |
| 0.542         | 30.51 | 3600  | 0.5723          | 0.7032   | 0.7031   |
| 0.5438        | 32.2  | 3800  | 0.5729          | 0.7016   | 0.7015   |
| 0.5383        | 33.9  | 4000  | 0.5808          | 0.6904   | 0.6914   |
| 0.5344        | 35.59 | 4200  | 0.5862          | 0.6839   | 0.6867   |
| 0.5358        | 37.29 | 4400  | 0.5750          | 0.7047   | 0.7047   |
| 0.5334        | 38.98 | 4600  | 0.5685          | 0.7032   | 0.7031   |
| 0.5244        | 40.68 | 4800  | 0.5772          | 0.7050   | 0.7058   |
| 0.5227        | 42.37 | 5000  | 0.5742          | 0.7069   | 0.7069   |
| 0.5249        | 44.07 | 5200  | 0.5741          | 0.7070   | 0.7069   |
| 0.524         | 45.76 | 5400  | 0.5694          | 0.7117   | 0.7116   |
| 0.5217        | 47.46 | 5600  | 0.5760          | 0.7127   | 0.7127   |
| 0.5237        | 49.15 | 5800  | 0.5696          | 0.7050   | 0.7053   |
| 0.5171        | 50.85 | 6000  | 0.5713          | 0.7077   | 0.7079   |
| 0.5173        | 52.54 | 6200  | 0.5708          | 0.7111   | 0.7111   |
| 0.5126        | 54.24 | 6400  | 0.5731          | 0.7106   | 0.7106   |
| 0.5143        | 55.93 | 6600  | 0.5712          | 0.7112   | 0.7111   |
| 0.5146        | 57.63 | 6800  | 0.5705          | 0.7162   | 0.7164   |
| 0.5071        | 59.32 | 7000  | 0.5744          | 0.7137   | 0.7138   |
| 0.507         | 61.02 | 7200  | 0.5708          | 0.7133   | 0.7132   |
| 0.5057        | 62.71 | 7400  | 0.5675          | 0.7149   | 0.7148   |
| 0.5059        | 64.41 | 7600  | 0.5719          | 0.7112   | 0.7111   |
| 0.5017        | 66.1  | 7800  | 0.5708          | 0.7149   | 0.7148   |
| 0.5033        | 67.8  | 8000  | 0.5740          | 0.7164   | 0.7169   |
| 0.501         | 69.49 | 8200  | 0.5729          | 0.7168   | 0.7169   |
| 0.4989        | 71.19 | 8400  | 0.5722          | 0.7101   | 0.7100   |
| 0.501         | 72.88 | 8600  | 0.5726          | 0.7126   | 0.7127   |
| 0.4974        | 74.58 | 8800  | 0.5719          | 0.7111   | 0.7111   |
| 0.4996        | 76.27 | 9000  | 0.5721          | 0.7133   | 0.7132   |
| 0.4969        | 77.97 | 9200  | 0.5719          | 0.7106   | 0.7106   |
| 0.4957        | 79.66 | 9400  | 0.5733          | 0.7131   | 0.7132   |
| 0.4961        | 81.36 | 9600  | 0.5729          | 0.7106   | 0.7106   |
| 0.4951        | 83.05 | 9800  | 0.5729          | 0.7085   | 0.7084   |
| 0.4945        | 84.75 | 10000 | 0.5731          | 0.7090   | 0.7090   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_4-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:52:01+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_4-seqsight\_16384\_512\_22M-L8\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset. It achieves the following results on the evaluation set: * Loss: 0.6171 * F1 Score: 0.6829 * Accuracy: 0.6830 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilhubert-finetuned-stutteringdetection

This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the stuttering dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5717
- Accuracy: 0.9024

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8357        | 1.0   | 92   | 0.7812          | 0.8659   |
| 0.2951        | 2.0   | 184  | 0.3680          | 0.8902   |
| 0.097         | 3.0   | 276  | 0.4000          | 0.8659   |
| 0.0872        | 4.0   | 368  | 0.3953          | 0.9024   |
| 0.4557        | 5.0   | 460  | 0.4904          | 0.9024   |
| 0.0368        | 6.0   | 552  | 0.4972          | 0.9024   |
| 0.0074        | 7.0   | 644  | 0.5408          | 0.9146   |
| 0.0039        | 8.0   | 736  | 0.5460          | 0.9024   |
| 0.0036        | 9.0   | 828  | 0.5684          | 0.9024   |
| 0.0035        | 10.0  | 920  | 0.5717          | 0.9024   |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
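The card leaves its usage sections empty; since the checkpoint is tagged for audio classification, a minimal pipeline call along these lines should work (the file path and any label strings in the output are placeholders, not values observed from the model).

```python
# Minimal sketch: classify a local audio file with the fine-tuned checkpoint.
from transformers import pipeline

clf = pipeline(
    "audio-classification",
    model="HareemFatima/distilhubert-finetuned-stutterdetection",
)
# "speech_sample.wav" is a placeholder path; the call returns a ranked list
# like [{"label": ..., "score": ...}, ...].
print(clf("speech_sample.wav"))
```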
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["HareemFatima/stutteringdetection"], "metrics": ["accuracy"], "base_model": "ntu-spml/distilhubert", "model-index": [{"name": "distilhubert-finetuned-stutteringdetection", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "stuttering", "type": "HareemFatima/stutteringdetection"}, "metrics": [{"type": "accuracy", "value": 0.9024390243902439, "name": "Accuracy"}]}]}]}
HareemFatima/distilhubert-finetuned-stutterdetection
null
[ "transformers", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:HareemFatima/stutteringdetection", "base_model:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-27T08:52:48+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #hubert #audio-classification #generated_from_trainer #dataset-HareemFatima/stutteringdetection #base_model-ntu-spml/distilhubert #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
distilhubert-finetuned-stutteringdetection ========================================== This model is a fine-tuned version of ntu-spml/distilhubert on the stuttering dataset. It achieves the following results on the evaluation set: * Loss: 0.5717 * Accuracy: 0.9024 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #hubert #audio-classification #generated_from_trainer #dataset-HareemFatima/stutteringdetection #base_model-ntu-spml/distilhubert #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_mouse_4-seqsight_16384_512_22M-L32_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6255
- F1 Score: 0.6676
- Accuracy: 0.6681

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6442        | 1.69  | 200   | 0.6142          | 0.6502   | 0.6506   |
| 0.6097        | 3.39  | 400   | 0.6025          | 0.6643   | 0.6660   |
| 0.5887        | 5.08  | 600   | 0.5891          | 0.6941   | 0.6941   |
| 0.5769        | 6.78  | 800   | 0.5898          | 0.6858   | 0.6888   |
| 0.5708        | 8.47  | 1000  | 0.5811          | 0.6832   | 0.6840   |
| 0.5602        | 10.17 | 1200  | 0.5839          | 0.6834   | 0.6845   |
| 0.5498        | 11.86 | 1400  | 0.5770          | 0.6938   | 0.6946   |
| 0.5427        | 13.56 | 1600  | 0.5732          | 0.6972   | 0.6973   |
| 0.532         | 15.25 | 1800  | 0.5853          | 0.7089   | 0.7100   |
| 0.5246        | 16.95 | 2000  | 0.5735          | 0.7015   | 0.7026   |
| 0.5147        | 18.64 | 2200  | 0.5812          | 0.6958   | 0.6968   |
| 0.5087        | 20.34 | 2400  | 0.5806          | 0.7043   | 0.7042   |
| 0.5004        | 22.03 | 2600  | 0.6028          | 0.6901   | 0.6920   |
| 0.4897        | 23.73 | 2800  | 0.5830          | 0.7053   | 0.7053   |
| 0.4826        | 25.42 | 3000  | 0.5912          | 0.7023   | 0.7037   |
| 0.4753        | 27.12 | 3200  | 0.6060          | 0.6832   | 0.6845   |
| 0.4712        | 28.81 | 3400  | 0.6045          | 0.6966   | 0.6973   |
| 0.4616        | 30.51 | 3600  | 0.6023          | 0.7047   | 0.7047   |
| 0.4613        | 32.2  | 3800  | 0.6148          | 0.7064   | 0.7063   |
| 0.452         | 33.9  | 4000  | 0.6174          | 0.7010   | 0.7010   |
| 0.4426        | 35.59 | 4200  | 0.6314          | 0.6923   | 0.6936   |
| 0.4417        | 37.29 | 4400  | 0.6398          | 0.7009   | 0.7010   |
| 0.4338        | 38.98 | 4600  | 0.6250          | 0.6990   | 0.6989   |
| 0.4208        | 40.68 | 4800  | 0.6326          | 0.6934   | 0.6936   |
| 0.4207        | 42.37 | 5000  | 0.6399          | 0.6974   | 0.6973   |
| 0.4196        | 44.07 | 5200  | 0.6558          | 0.6920   | 0.6920   |
| 0.4148        | 45.76 | 5400  | 0.6534          | 0.6995   | 0.6994   |
| 0.4105        | 47.46 | 5600  | 0.6697          | 0.6937   | 0.6936   |
| 0.4052        | 49.15 | 5800  | 0.6619          | 0.6894   | 0.6893   |
| 0.4024        | 50.85 | 6000  | 0.6680          | 0.6872   | 0.6872   |
| 0.3997        | 52.54 | 6200  | 0.6687          | 0.6945   | 0.6946   |
| 0.3922        | 54.24 | 6400  | 0.6793          | 0.6915   | 0.6920   |
| 0.3899        | 55.93 | 6600  | 0.6936          | 0.6889   | 0.6888   |
| 0.3878        | 57.63 | 6800  | 0.6648          | 0.6931   | 0.6930   |
| 0.3831        | 59.32 | 7000  | 0.6911          | 0.6930   | 0.6930   |
| 0.3774        | 61.02 | 7200  | 0.6868          | 0.6937   | 0.6936   |
| 0.3768        | 62.71 | 7400  | 0.6846          | 0.6883   | 0.6883   |
| 0.3747        | 64.41 | 7600  | 0.7037          | 0.6926   | 0.6925   |
| 0.3709        | 66.1  | 7800  | 0.7056          | 0.6868   | 0.6867   |
| 0.3676        | 67.8  | 8000  | 0.7123          | 0.6868   | 0.6872   |
| 0.3688        | 69.49 | 8200  | 0.7095          | 0.6904   | 0.6904   |
| 0.3665        | 71.19 | 8400  | 0.7124          | 0.6852   | 0.6851   |
| 0.3639        | 72.88 | 8600  | 0.7155          | 0.6867   | 0.6867   |
| 0.3599        | 74.58 | 8800  | 0.7138          | 0.6883   | 0.6883   |
| 0.3638        | 76.27 | 9000  | 0.7179          | 0.6888   | 0.6888   |
| 0.3555        | 77.97 | 9200  | 0.7185          | 0.6905   | 0.6904   |
| 0.3558        | 79.66 | 9400  | 0.7219          | 0.6910   | 0.6909   |
| 0.3562        | 81.36 | 9600  | 0.7225          | 0.6914   | 0.6914   |
| 0.3503        | 83.05 | 9800  | 0.7262          | 0.6899   | 0.6899   |
| 0.3538        | 84.75 | 10000 | 0.7271          | 0.6862   | 0.6861   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_4-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:53:29+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_4-seqsight\_16384\_512\_22M-L32\_f ============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset. It achieves the following results on the evaluation set: * Loss: 0.6255 * F1 Score: 0.6676 * Accuracy: 0.6681 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_mouse_3-seqsight_16384_512_22M-L1_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6449
- F1 Score: 0.8032
- Accuracy: 0.8033

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6227        | 13.33  | 200   | 0.5259          | 0.7343   | 0.7364   |
| 0.5385        | 26.67  | 400   | 0.4620          | 0.7774   | 0.7782   |
| 0.4836        | 40.0   | 600   | 0.4452          | 0.8026   | 0.8033   |
| 0.436         | 53.33  | 800   | 0.4247          | 0.8030   | 0.8033   |
| 0.4031        | 66.67  | 1000  | 0.4354          | 0.8147   | 0.8159   |
| 0.3797        | 80.0   | 1200  | 0.4339          | 0.7988   | 0.7992   |
| 0.3566        | 93.33  | 1400  | 0.4335          | 0.8031   | 0.8033   |
| 0.334         | 106.67 | 1600  | 0.4571          | 0.8069   | 0.8075   |
| 0.3176        | 120.0  | 1800  | 0.4663          | 0.8026   | 0.8033   |
| 0.3078        | 133.33 | 2000  | 0.4810          | 0.8065   | 0.8075   |
| 0.2979        | 146.67 | 2200  | 0.4668          | 0.8149   | 0.8159   |
| 0.2921        | 160.0  | 2400  | 0.4999          | 0.7969   | 0.7992   |
| 0.2757        | 173.33 | 2600  | 0.4878          | 0.8108   | 0.8117   |
| 0.264         | 186.67 | 2800  | 0.4892          | 0.8112   | 0.8117   |
| 0.2574        | 200.0  | 3000  | 0.5113          | 0.8026   | 0.8033   |
| 0.2526        | 213.33 | 3200  | 0.4938          | 0.8114   | 0.8117   |
| 0.2447        | 226.67 | 3400  | 0.5226          | 0.8067   | 0.8075   |
| 0.2377        | 240.0  | 3600  | 0.5326          | 0.8071   | 0.8075   |
| 0.2274        | 253.33 | 3800  | 0.5275          | 0.8115   | 0.8117   |
| 0.2185        | 266.67 | 4000  | 0.5259          | 0.8115   | 0.8117   |
| 0.2171        | 280.0  | 4200  | 0.5551          | 0.8026   | 0.8033   |
| 0.2086        | 293.33 | 4400  | 0.5611          | 0.8067   | 0.8075   |
| 0.2075        | 306.67 | 4600  | 0.5746          | 0.8108   | 0.8117   |
| 0.203         | 320.0  | 4800  | 0.5510          | 0.8028   | 0.8033   |
| 0.1948        | 333.33 | 5000  | 0.5798          | 0.7942   | 0.7950   |
| 0.1945        | 346.67 | 5200  | 0.5596          | 0.8028   | 0.8033   |
| 0.1894        | 360.0  | 5400  | 0.5673          | 0.8030   | 0.8033   |
| 0.1858        | 373.33 | 5600  | 0.5887          | 0.7907   | 0.7908   |
| 0.1775        | 386.67 | 5800  | 0.6021          | 0.7944   | 0.7950   |
| 0.1806        | 400.0  | 6000  | 0.5913          | 0.7903   | 0.7908   |
| 0.1743        | 413.33 | 6200  | 0.5862          | 0.7988   | 0.7992   |
| 0.1738        | 426.67 | 6400  | 0.5910          | 0.7988   | 0.7992   |
| 0.1686        | 440.0  | 6600  | 0.5969          | 0.7988   | 0.7992   |
| 0.1692        | 453.33 | 6800  | 0.6030          | 0.8030   | 0.8033   |
| 0.1637        | 466.67 | 7000  | 0.6082          | 0.7991   | 0.7992   |
| 0.1599        | 480.0  | 7200  | 0.6129          | 0.7865   | 0.7866   |
| 0.1592        | 493.33 | 7400  | 0.6125          | 0.8033   | 0.8033   |
| 0.1579        | 506.67 | 7600  | 0.6296          | 0.8073   | 0.8075   |
| 0.158         | 520.0  | 7800  | 0.6281          | 0.8030   | 0.8033   |
| 0.1551        | 533.33 | 8000  | 0.6195          | 0.8116   | 0.8117   |
| 0.1567        | 546.67 | 8200  | 0.6295          | 0.8155   | 0.8159   |
| 0.154         | 560.0  | 8400  | 0.6250          | 0.8115   | 0.8117   |
| 0.1517        | 573.33 | 8600  | 0.6295          | 0.8073   | 0.8075   |
| 0.1529        | 586.67 | 8800  | 0.6249          | 0.8199   | 0.8201   |
| 0.1509        | 600.0  | 9000  | 0.6235          | 0.8033   | 0.8033   |
| 0.1486        | 613.33 | 9200  | 0.6299          | 0.8115   | 0.8117   |
| 0.1487        | 626.67 | 9400  | 0.6317          | 0.8032   | 0.8033   |
| 0.1502        | 640.0  | 9600  | 0.6266          | 0.8073   | 0.8075   |
| 0.1485        | 653.33 | 9800  | 0.6304          | 0.8115   | 0.8117   |
| 0.1456        | 666.67 | 10000 | 0.6292          | 0.8073   | 0.8075   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
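Each GUE card logs both an F1 score and accuracy at every evaluation step. How exactly the authors computed F1 is not stated; a plausible `compute_metrics` hook for `Trainer`, assuming macro-averaged F1, would look like this:

```python
# Sketch of a Trainer compute_metrics hook; macro averaging is an assumption.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)  # pick the highest-scoring class
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }
```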
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_3-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_3-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:53:30+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_3-seqsight\_16384\_512\_22M-L1\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset. It achieves the following results on the evaluation set: * Loss: 0.6449 * F1 Score: 0.8032 * Accuracy: 0.8033 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_mouse_3-seqsight_16384_512_22M-L8_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7940
- F1 Score: 0.8447
- Accuracy: 0.8452

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5818        | 13.33  | 200   | 0.4453          | 0.7948   | 0.7950   |
| 0.4301        | 26.67  | 400   | 0.4297          | 0.8106   | 0.8117   |
| 0.3549        | 40.0   | 600   | 0.4200          | 0.8200   | 0.8201   |
| 0.2973        | 53.33  | 800   | 0.4637          | 0.8272   | 0.8285   |
| 0.2662        | 66.67  | 1000  | 0.5280          | 0.8046   | 0.8075   |
| 0.2336        | 80.0   | 1200  | 0.5160          | 0.8153   | 0.8159   |
| 0.2145        | 93.33  | 1400  | 0.5109          | 0.8281   | 0.8285   |
| 0.1911        | 106.67 | 1600  | 0.5495          | 0.8197   | 0.8201   |
| 0.1777        | 120.0  | 1800  | 0.5476          | 0.7987   | 0.7992   |
| 0.1646        | 133.33 | 2000  | 0.6371          | 0.7880   | 0.7908   |
| 0.1505        | 146.67 | 2200  | 0.5463          | 0.8108   | 0.8117   |
| 0.141         | 160.0  | 2400  | 0.5784          | 0.8149   | 0.8159   |
| 0.1268        | 173.33 | 2600  | 0.5710          | 0.8408   | 0.8410   |
| 0.1155        | 186.67 | 2800  | 0.5945          | 0.8367   | 0.8368   |
| 0.1109        | 200.0  | 3000  | 0.5784          | 0.8410   | 0.8410   |
| 0.1061        | 213.33 | 3200  | 0.5774          | 0.8534   | 0.8536   |
| 0.0977        | 226.67 | 3400  | 0.6396          | 0.8449   | 0.8452   |
| 0.097         | 240.0  | 3600  | 0.6888          | 0.8284   | 0.8285   |
| 0.0888        | 253.33 | 3800  | 0.6826          | 0.8450   | 0.8452   |
| 0.0771        | 266.67 | 4000  | 0.6878          | 0.8492   | 0.8494   |
| 0.0806        | 280.0  | 4200  | 0.6557          | 0.8408   | 0.8410   |
| 0.0765        | 293.33 | 4400  | 0.6868          | 0.8405   | 0.8410   |
| 0.0699        | 306.67 | 4600  | 0.7310          | 0.8493   | 0.8494   |
| 0.0679        | 320.0  | 4800  | 0.6940          | 0.8410   | 0.8410   |
| 0.0653        | 333.33 | 5000  | 0.7225          | 0.8491   | 0.8494   |
| 0.0615        | 346.67 | 5200  | 0.7379          | 0.8491   | 0.8494   |
| 0.0607        | 360.0  | 5400  | 0.7678          | 0.8322   | 0.8326   |
| 0.0558        | 373.33 | 5600  | 0.7789          | 0.8367   | 0.8368   |
| 0.0536        | 386.67 | 5800  | 0.8346          | 0.8446   | 0.8452   |
| 0.0549        | 400.0  | 6000  | 0.7734          | 0.8408   | 0.8410   |
| 0.0502        | 413.33 | 6200  | 0.7813          | 0.8493   | 0.8494   |
| 0.0557        | 426.67 | 6400  | 0.7285          | 0.8575   | 0.8577   |
| 0.0453        | 440.0  | 6600  | 0.8123          | 0.8450   | 0.8452   |
| 0.0515        | 453.33 | 6800  | 0.7763          | 0.8575   | 0.8577   |
| 0.0446        | 466.67 | 7000  | 0.7729          | 0.8493   | 0.8494   |
| 0.0458        | 480.0  | 7200  | 0.7779          | 0.8450   | 0.8452   |
| 0.0439        | 493.33 | 7400  | 0.7898          | 0.8368   | 0.8368   |
| 0.0461        | 506.67 | 7600  | 0.8091          | 0.8450   | 0.8452   |
| 0.0449        | 520.0  | 7800  | 0.8044          | 0.8491   | 0.8494   |
| 0.0444        | 533.33 | 8000  | 0.7947          | 0.8408   | 0.8410   |
| 0.0403        | 546.67 | 8200  | 0.8372          | 0.8449   | 0.8452   |
| 0.0429        | 560.0  | 8400  | 0.8311          | 0.8449   | 0.8452   |
| 0.0423        | 573.33 | 8600  | 0.8076          | 0.8451   | 0.8452   |
| 0.0406        | 586.67 | 8800  | 0.8227          | 0.8533   | 0.8536   |
| 0.0397        | 600.0  | 9000  | 0.8339          | 0.8491   | 0.8494   |
| 0.0374        | 613.33 | 9200  | 0.8464          | 0.8449   | 0.8452   |
| 0.04          | 626.67 | 9400  | 0.8335          | 0.8491   | 0.8494   |
| 0.0407        | 640.0  | 9600  | 0.8229          | 0.8450   | 0.8452   |
| 0.0377        | 653.33 | 9800  | 0.8410          | 0.8449   | 0.8452   |
| 0.0386        | 666.67 | 10000 | 0.8276          | 0.8449   | 0.8452   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
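For inference on a single input, a sketch like the one below applies to any of these GUE classifiers. It assumes `tokenizer` and `model` were loaded as in the earlier PEFT sketch, and the DNA string is an arbitrary example, not taken from the dataset.

```python
# Hedged inference sketch: classify one DNA sequence.
import torch

sequence = "ACGTACGTACGTACGTACGTACGT"  # made-up example input
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = int(logits.argmax(dim=-1))
print(predicted_class)  # index of the predicted label
```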
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_3-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_3-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:53:59+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_3-seqsight\_16384\_512\_22M-L8\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset. It achieves the following results on the evaluation set: * Loss: 0.7940 * F1 Score: 0.8447 * Accuracy: 0.8452 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_mouse_3-seqsight_16384_512_22M-L32_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8993
- F1 Score: 0.8409
- Accuracy: 0.8410

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5266        | 13.33  | 200   | 0.3982          | 0.8242   | 0.8243   |
| 0.35          | 26.67  | 400   | 0.4361          | 0.8192   | 0.8201   |
| 0.2628        | 40.0   | 600   | 0.4681          | 0.8451   | 0.8452   |
| 0.2005        | 53.33  | 800   | 0.5248          | 0.8278   | 0.8285   |
| 0.1675        | 66.67  | 1000  | 0.5468          | 0.8322   | 0.8326   |
| 0.1296        | 80.0   | 1200  | 0.5736          | 0.8200   | 0.8201   |
| 0.1139        | 93.33  | 1400  | 0.6281          | 0.8242   | 0.8243   |
| 0.0923        | 106.67 | 1600  | 0.7864          | 0.8226   | 0.8243   |
| 0.0825        | 120.0  | 1800  | 0.6495          | 0.8282   | 0.8285   |
| 0.0699        | 133.33 | 2000  | 0.7029          | 0.8278   | 0.8285   |
| 0.0566        | 146.67 | 2200  | 0.7612          | 0.8240   | 0.8243   |
| 0.055         | 160.0  | 2400  | 0.7407          | 0.8274   | 0.8285   |
| 0.0488        | 173.33 | 2600  | 0.7233          | 0.8241   | 0.8243   |
| 0.042         | 186.67 | 2800  | 0.7600          | 0.8493   | 0.8494   |
| 0.0406        | 200.0  | 3000  | 0.7998          | 0.8159   | 0.8159   |
| 0.0363        | 213.33 | 3200  | 0.7662          | 0.8368   | 0.8368   |
| 0.0335        | 226.67 | 3400  | 0.8254          | 0.8243   | 0.8243   |
| 0.0354        | 240.0  | 3600  | 0.8436          | 0.8241   | 0.8243   |
| 0.0307        | 253.33 | 3800  | 0.8122          | 0.8368   | 0.8368   |
| 0.023         | 266.67 | 4000  | 0.8677          | 0.8326   | 0.8326   |
| 0.0255        | 280.0  | 4200  | 0.9085          | 0.8159   | 0.8159   |
| 0.0259        | 293.33 | 4400  | 0.8586          | 0.8199   | 0.8201   |
| 0.0259        | 306.67 | 4600  | 0.8769          | 0.8159   | 0.8159   |
| 0.0223        | 320.0  | 4800  | 0.8891          | 0.8367   | 0.8368   |
| 0.0205        | 333.33 | 5000  | 0.9187          | 0.8324   | 0.8326   |
| 0.0173        | 346.67 | 5200  | 0.9073          | 0.8325   | 0.8326   |
| 0.0193        | 360.0  | 5400  | 0.8803          | 0.8365   | 0.8368   |
| 0.0168        | 373.33 | 5600  | 0.8920          | 0.8410   | 0.8410   |
| 0.0153        | 386.67 | 5800  | 0.9922          | 0.8282   | 0.8285   |
| 0.016         | 400.0  | 6000  | 0.9730          | 0.8116   | 0.8117   |
| 0.0156        | 413.33 | 6200  | 1.0085          | 0.8156   | 0.8159   |
| 0.0155        | 426.67 | 6400  | 0.9479          | 0.8114   | 0.8117   |
| 0.0147        | 440.0  | 6600  | 0.9176          | 0.8114   | 0.8117   |
| 0.016         | 453.33 | 6800  | 0.8997          | 0.8200   | 0.8201   |
| 0.013         | 466.67 | 7000  | 0.9789          | 0.8326   | 0.8326   |
| 0.0125        | 480.0  | 7200  | 0.9769          | 0.8326   | 0.8326   |
| 0.0126        | 493.33 | 7400  | 0.9434          | 0.8326   | 0.8326   |
| 0.0108        | 506.67 | 7600  | 1.0108          | 0.8285   | 0.8285   |
| 0.0128        | 520.0  | 7800  | 0.9395          | 0.8283   | 0.8285   |
| 0.0118        | 533.33 | 8000  | 0.9746          | 0.8240   | 0.8243   |
| 0.0092        | 546.67 | 8200  | 1.0324          | 0.8241   | 0.8243   |
| 0.0123        | 560.0  | 8400  | 0.9384          | 0.8367   | 0.8368   |
| 0.01          | 573.33 | 8600  | 0.9883          | 0.8325   | 0.8326   |
| 0.0106        | 586.67 | 8800  | 1.0023          | 0.8325   | 0.8326   |
| 0.0107        | 600.0  | 9000  | 0.9692          | 0.8240   | 0.8243   |
| 0.0083        | 613.33 | 9200  | 0.9892          | 0.8325   | 0.8326   |
| 0.0083        | 626.67 | 9400  | 0.9902          | 0.8367   | 0.8368   |
| 0.011         | 640.0  | 9600  | 0.9960          | 0.8368   | 0.8368   |
| 0.0082        | 653.33 | 9800  | 0.9793          | 0.8409   | 0.8410   |
| 0.0105        | 666.67 | 10000 | 0.9793          | 0.8409   | 0.8410   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
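All of these cards describe the same optimizer/scheduler pairing: Adam with betas (0.9, 0.999) and epsilon 1e-08 under a linear schedule over 10,000 steps. `Trainer` builds this internally, but written out by hand (assuming a `model` in scope and, since no warmup is listed, zero warmup steps) it would be roughly:

```python
# Hand-rolled equivalent of the cards' optimizer/scheduler settings (a sketch).
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.Adam(
    model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000)

for step in range(10_000):
    ...  # forward/backward pass elided
    optimizer.step()
    scheduler.step()   # decay the learning rate linearly to zero
    optimizer.zero_grad()
```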
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_3-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_3-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:54:45+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_3-seqsight\_16384\_512\_22M-L32\_f ============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset. It achieves the following results on the evaluation set: * Loss: 0.8993 * F1 Score: 0.8409 * Accuracy: 0.8410 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
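This card is an unfilled template, but the record's tags mark the checkpoint as a T5 text2text-generation model, and its id suggests an NER fine-tune. A hedged usage sketch follows; the expected prompt format and output schema are guesses, not documented behavior.

```python
# Hedged sketch: run the checkpoint through the text2text-generation pipeline.
from transformers import pipeline

t2t = pipeline("text2text-generation", model="Andro9669/t5-ner")
result = t2t("Barack Obama visited Paris.")  # prompt format is an assumption
print(result[0]["generated_text"])
```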
{"library_name": "transformers", "tags": []}
Andro9669/t5-ner
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T08:55:52+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
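The "How to Get Started" section of the card above is an empty template, so a hedged starter is sketched here. The repo id is taken from this record's `id` field, and the causal-LM head is an assumption based on the model's name; neither is confirmed by the card itself.

```python
# Hypothetical starter for the card's empty "How to Get Started" section.
# The repo id comes from this dataset row, and AutoModelForCausalLM is an
# assumption inferred from the "Llama-3-8B" name, not stated in the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "siacus/Llama-3-8B-tweets-10"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```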
{"library_name": "transformers", "tags": []}
siacus/Llama-3-8B-tweets-10
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T08:56:06+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_2-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.4210 - F1 Score: 0.8567 - Accuracy: 0.8567 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.4428 | 9.52 | 200 | 0.3414 | 0.8475 | 0.8476 | | 0.3343 | 19.05 | 400 | 0.3365 | 0.8407 | 0.8415 | | 0.2965 | 28.57 | 600 | 0.3178 | 0.8688 | 0.8689 | | 0.2711 | 38.1 | 800 | 0.3228 | 0.8597 | 0.8598 | | 0.2544 | 47.62 | 1000 | 0.3130 | 0.8506 | 0.8506 | | 0.2381 | 57.14 | 1200 | 0.3218 | 0.8596 | 0.8598 | | 0.2263 | 66.67 | 1400 | 0.3147 | 0.8567 | 0.8567 | | 0.2167 | 76.19 | 1600 | 0.3256 | 0.8564 | 0.8567 | | 0.2062 | 85.71 | 1800 | 0.3272 | 0.8564 | 0.8567 | | 0.197 | 95.24 | 2000 | 0.3177 | 0.8628 | 0.8628 | | 0.1882 | 104.76 | 2200 | 0.3257 | 0.8658 | 0.8659 | | 0.1802 | 114.29 | 2400 | 0.3357 | 0.8659 | 0.8659 | | 0.1786 | 123.81 | 2600 | 0.3291 | 0.8658 | 0.8659 | | 0.1733 | 133.33 | 2800 | 0.3512 | 0.8474 | 0.8476 | | 0.1663 | 142.86 | 3000 | 0.3306 | 0.8628 | 0.8628 | | 0.1619 | 152.38 | 3200 | 0.3402 | 0.8536 | 0.8537 | | 0.1572 | 161.9 | 3400 | 0.3509 | 0.8534 | 0.8537 | | 0.1508 | 171.43 | 3600 | 0.3477 | 0.8627 | 0.8628 | | 0.1502 | 180.95 | 3800 | 0.3449 | 0.8658 | 0.8659 | | 0.1432 | 190.48 | 4000 | 0.3475 | 0.8597 | 0.8598 | | 0.1382 | 200.0 | 4200 | 0.3617 | 0.8657 | 0.8659 | | 0.1379 | 209.52 | 4400 | 0.3404 | 0.8719 | 0.8720 | | 0.1335 | 219.05 | 4600 | 0.3454 | 0.8719 | 0.8720 | | 0.131 | 228.57 | 4800 | 0.3593 | 0.8688 | 0.8689 | | 0.1305 | 238.1 | 5000 | 0.3712 | 0.8718 | 0.8720 | | 0.1264 | 247.62 | 5200 | 0.3574 | 0.8658 | 0.8659 | | 0.1236 | 257.14 | 5400 | 0.3616 | 0.8658 | 0.8659 | | 0.1228 | 266.67 | 5600 | 0.3769 | 0.8626 | 0.8628 | | 0.1209 | 276.19 | 5800 | 0.3600 | 0.8658 | 0.8659 | | 0.1164 | 285.71 | 6000 | 0.3763 | 0.8626 | 0.8628 | | 0.1153 | 295.24 | 6200 | 0.3627 | 0.8627 | 0.8628 | | 0.1143 | 304.76 | 6400 | 0.3730 | 0.8627 | 0.8628 | | 0.1104 | 314.29 | 6600 | 0.3826 | 0.8719 | 0.8720 | | 0.113 | 323.81 | 6800 | 0.3738 | 0.8627 | 0.8628 | | 0.107 | 333.33 | 7000 | 0.3782 | 0.8718 | 0.8720 | | 0.1102 | 342.86 | 7200 | 0.3708 | 0.8749 | 0.875 | | 0.1038 | 352.38 | 7400 | 0.3719 | 0.8749 | 0.875 | | 0.1064 | 361.9 | 7600 | 0.3751 | 0.8688 | 0.8689 | | 0.1061 | 371.43 | 7800 | 0.3803 | 0.8718 | 0.8720 | | 0.1051 | 380.95 | 8000 | 0.3799 | 0.8718 | 0.8720 | | 0.1035 | 390.48 | 8200 | 0.3796 | 0.8718 | 0.8720 | | 0.1017 | 400.0 | 8400 | 0.3828 | 0.8718 | 0.8720 | | 0.1003 | 409.52 | 8600 | 0.3778 | 0.8749 | 0.875 | | 0.0996 | 419.05 | 8800 | 0.3786 | 
0.8750 | 0.875 | | 0.1005 | 428.57 | 9000 | 0.3830 | 0.8718 | 0.8720 | | 0.101 | 438.1 | 9200 | 0.3836 | 0.8718 | 0.8720 | | 0.0961 | 447.62 | 9400 | 0.3840 | 0.8718 | 0.8720 | | 0.0971 | 457.14 | 9600 | 0.3813 | 0.8749 | 0.875 | | 0.0968 | 466.67 | 9800 | 0.3824 | 0.8749 | 0.875 | | 0.0975 | 476.19 | 10000 | 0.3843 | 0.8718 | 0.8720 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
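The card above lists the run's hyperparameters but not the training code. A minimal sketch of how a comparable run could be set up with the PEFT and Transformers releases named in the card follows; the LoRA configuration, label count, dataset column, and split names are all assumptions not given by the card.

```python
# Hedged sketch of a fine-tuning setup matching the card's hyperparameters.
# LoRA rank/alpha, num_labels, the "sequence" column, and split names are
# assumptions; lr, batch sizes, seed, scheduler, and step count come from the card.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "mahdibaghbanzadeh/seqsight_16384_512_22M"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)  # assumption
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="SEQ_CLS"))  # assumption

ds = load_dataset("mahdibaghbanzadeh/GUE_mouse_2")
ds = ds.map(lambda b: tokenizer(b["sequence"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="out",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,
    lr_scheduler_type="linear",  # the Adam betas/epsilon in the card are Trainer defaults
    evaluation_strategy="steps",
    eval_steps=200,  # matches the card's evaluation cadence
)
Trainer(model=model, args=args, train_dataset=ds["train"],
        eval_dataset=ds["validation"], tokenizer=tokenizer).train()
```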
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_2-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:56:10+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_2-seqsight\_16384\_512\_22M-L1\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset. It achieves the following results on the evaluation set: * Loss: 0.4210 * F1 Score: 0.8567 * Accuracy: 0.8567 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_2-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.4164 - F1 Score: 0.8719 - Accuracy: 0.8720 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3997 | 9.52 | 200 | 0.3155 | 0.8594 | 0.8598 | | 0.2795 | 19.05 | 400 | 0.2910 | 0.8779 | 0.8780 | | 0.2392 | 28.57 | 600 | 0.2782 | 0.8841 | 0.8841 | | 0.2094 | 38.1 | 800 | 0.2874 | 0.8750 | 0.875 | | 0.1877 | 47.62 | 1000 | 0.2986 | 0.8749 | 0.875 | | 0.1683 | 57.14 | 1200 | 0.3442 | 0.8713 | 0.8720 | | 0.1554 | 66.67 | 1400 | 0.3778 | 0.8744 | 0.875 | | 0.1464 | 76.19 | 1600 | 0.3928 | 0.8683 | 0.8689 | | 0.1344 | 85.71 | 1800 | 0.3672 | 0.8930 | 0.8933 | | 0.125 | 95.24 | 2000 | 0.3655 | 0.8838 | 0.8841 | | 0.1126 | 104.76 | 2200 | 0.3630 | 0.8931 | 0.8933 | | 0.1014 | 114.29 | 2400 | 0.4538 | 0.8930 | 0.8933 | | 0.1011 | 123.81 | 2600 | 0.3930 | 0.8931 | 0.8933 | | 0.0924 | 133.33 | 2800 | 0.4071 | 0.8899 | 0.8902 | | 0.0862 | 142.86 | 3000 | 0.4078 | 0.8962 | 0.8963 | | 0.08 | 152.38 | 3200 | 0.4487 | 0.8868 | 0.8872 | | 0.0767 | 161.9 | 3400 | 0.5149 | 0.8806 | 0.8811 | | 0.0709 | 171.43 | 3600 | 0.4059 | 0.8962 | 0.8963 | | 0.069 | 180.95 | 3800 | 0.4635 | 0.8808 | 0.8811 | | 0.0625 | 190.48 | 4000 | 0.4600 | 0.8901 | 0.8902 | | 0.0573 | 200.0 | 4200 | 0.5042 | 0.8807 | 0.8811 | | 0.0581 | 209.52 | 4400 | 0.4973 | 0.8776 | 0.8780 | | 0.0581 | 219.05 | 4600 | 0.4788 | 0.8777 | 0.8780 | | 0.054 | 228.57 | 4800 | 0.5444 | 0.8744 | 0.875 | | 0.0545 | 238.1 | 5000 | 0.4845 | 0.8900 | 0.8902 | | 0.0509 | 247.62 | 5200 | 0.4610 | 0.8932 | 0.8933 | | 0.046 | 257.14 | 5400 | 0.5035 | 0.8962 | 0.8963 | | 0.0447 | 266.67 | 5600 | 0.5513 | 0.8807 | 0.8811 | | 0.0425 | 276.19 | 5800 | 0.5364 | 0.8870 | 0.8872 | | 0.0425 | 285.71 | 6000 | 0.5192 | 0.8870 | 0.8872 | | 0.0384 | 295.24 | 6200 | 0.5858 | 0.8869 | 0.8872 | | 0.0401 | 304.76 | 6400 | 0.5853 | 0.8838 | 0.8841 | | 0.0395 | 314.29 | 6600 | 0.5985 | 0.8807 | 0.8811 | | 0.0392 | 323.81 | 6800 | 0.5633 | 0.8807 | 0.8811 | | 0.0371 | 333.33 | 7000 | 0.5896 | 0.8745 | 0.875 | | 0.0368 | 342.86 | 7200 | 0.5585 | 0.8838 | 0.8841 | | 0.0358 | 352.38 | 7400 | 0.5254 | 0.8931 | 0.8933 | | 0.0373 | 361.9 | 7600 | 0.5235 | 0.8870 | 0.8872 | | 0.0346 | 371.43 | 7800 | 0.5312 | 0.8900 | 0.8902 | | 0.0331 | 380.95 | 8000 | 0.6006 | 0.8807 | 0.8811 | | 0.0304 | 390.48 | 8200 | 0.5686 | 0.8870 | 0.8872 | | 0.0318 | 400.0 | 8400 | 0.5678 | 0.8869 | 0.8872 | | 0.0308 | 409.52 | 8600 | 0.5983 | 0.8807 | 0.8811 | | 0.0298 | 419.05 | 8800 | 0.5908 | 0.8807 
| 0.8811 | | 0.0315 | 428.57 | 9000 | 0.5852 | 0.8777 | 0.8780 | | 0.0307 | 438.1 | 9200 | 0.5532 | 0.8808 | 0.8811 | | 0.0288 | 447.62 | 9400 | 0.5655 | 0.8870 | 0.8872 | | 0.0285 | 457.14 | 9600 | 0.5867 | 0.8808 | 0.8811 | | 0.0294 | 466.67 | 9800 | 0.5711 | 0.8808 | 0.8811 | | 0.0272 | 476.19 | 10000 | 0.5798 | 0.8808 | 0.8811 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
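Each of these cards reports paired "F1 Score" and "Accuracy" columns at every evaluation step. A plausible `compute_metrics` hook producing that pair is sketched below; the macro averaging is an assumption, since the cards do not state how F1 is aggregated.

```python
# Hedged metrics hook matching the cards' "F1 Score" / "Accuracy" columns.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),  # averaging is an assumption
        "accuracy": accuracy_score(labels, preds),
    }
```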
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_2-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:56:23+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_2-seqsight\_16384\_512\_22M-L8\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset. It achieves the following results on the evaluation set: * Loss: 0.4164 * F1 Score: 0.8719 * Accuracy: 0.8720 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
reinforcement-learning
null
# **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
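Unit 4 of the linked course implements the REINFORCE policy-gradient algorithm in PyTorch. A compact, self-contained sketch of that training loop for CartPole-v1 follows; the network width, learning rate, discount factor, and episode count are illustrative assumptions, not the values used to train this checkpoint.

```python
# Minimal REINFORCE sketch for CartPole-v1 (the algorithm named by this card).
# All hyperparameters here are illustrative assumptions.
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2), nn.Softmax(dim=-1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        dist = torch.distributions.Categorical(policy(torch.as_tensor(obs, dtype=torch.float32)))
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)

    # Discounted returns, accumulated backwards over the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.as_tensor(returns, dtype=torch.float32)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction

    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```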
{"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-Cartpole-v1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "500.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
HusseinEid/Reinforce-Cartpole-v1
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-27T08:57:20+00:00
[]
[]
TAGS #CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing CartPole-v1 This is a trained model of a Reinforce agent playing CartPole-v1. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_2-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.3136 - F1 Score: 0.8628 - Accuracy: 0.8628 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3675 | 9.52 | 200 | 0.2939 | 0.8719 | 0.8720 | | 0.2394 | 19.05 | 400 | 0.2925 | 0.8809 | 0.8811 | | 0.1887 | 28.57 | 600 | 0.2900 | 0.8902 | 0.8902 | | 0.1513 | 38.1 | 800 | 0.3103 | 0.8750 | 0.875 | | 0.1232 | 47.62 | 1000 | 0.4024 | 0.8748 | 0.875 | | 0.1012 | 57.14 | 1200 | 0.4141 | 0.8686 | 0.8689 | | 0.0864 | 66.67 | 1400 | 0.5371 | 0.8683 | 0.8689 | | 0.0737 | 76.19 | 1600 | 0.5424 | 0.8745 | 0.875 | | 0.0604 | 85.71 | 1800 | 0.6019 | 0.8776 | 0.8780 | | 0.0523 | 95.24 | 2000 | 0.5280 | 0.8749 | 0.875 | | 0.0464 | 104.76 | 2200 | 0.5558 | 0.8780 | 0.8780 | | 0.0381 | 114.29 | 2400 | 0.6441 | 0.8839 | 0.8841 | | 0.036 | 123.81 | 2600 | 0.6390 | 0.8718 | 0.8720 | | 0.0323 | 133.33 | 2800 | 0.5986 | 0.8719 | 0.8720 | | 0.0271 | 142.86 | 3000 | 0.7002 | 0.8809 | 0.8811 | | 0.0271 | 152.38 | 3200 | 0.6997 | 0.8748 | 0.875 | | 0.0257 | 161.9 | 3400 | 0.7475 | 0.8838 | 0.8841 | | 0.0215 | 171.43 | 3600 | 0.7352 | 0.8779 | 0.8780 | | 0.0227 | 180.95 | 3800 | 0.7311 | 0.8778 | 0.8780 | | 0.0178 | 190.48 | 4000 | 0.7539 | 0.8749 | 0.875 | | 0.0169 | 200.0 | 4200 | 0.7203 | 0.8718 | 0.8720 | | 0.0174 | 209.52 | 4400 | 0.7283 | 0.8779 | 0.8780 | | 0.0154 | 219.05 | 4600 | 0.7179 | 0.8810 | 0.8811 | | 0.015 | 228.57 | 4800 | 0.7887 | 0.8656 | 0.8659 | | 0.0155 | 238.1 | 5000 | 0.7739 | 0.8718 | 0.8720 | | 0.0131 | 247.62 | 5200 | 0.7593 | 0.8719 | 0.8720 | | 0.0132 | 257.14 | 5400 | 0.7224 | 0.8779 | 0.8780 | | 0.0111 | 266.67 | 5600 | 0.7515 | 0.8749 | 0.875 | | 0.0126 | 276.19 | 5800 | 0.7008 | 0.8811 | 0.8811 | | 0.0093 | 285.71 | 6000 | 0.7463 | 0.8719 | 0.8720 | | 0.0082 | 295.24 | 6200 | 0.7215 | 0.8811 | 0.8811 | | 0.0102 | 304.76 | 6400 | 0.7556 | 0.8687 | 0.8689 | | 0.0081 | 314.29 | 6600 | 0.7973 | 0.8779 | 0.8780 | | 0.0101 | 323.81 | 6800 | 0.7145 | 0.8688 | 0.8689 | | 0.0078 | 333.33 | 7000 | 0.7828 | 0.8657 | 0.8659 | | 0.009 | 342.86 | 7200 | 0.7628 | 0.8749 | 0.875 | | 0.0092 | 352.38 | 7400 | 0.7076 | 0.8750 | 0.875 | | 0.0077 | 361.9 | 7600 | 0.7738 | 0.8658 | 0.8659 | | 0.0086 | 371.43 | 7800 | 0.7825 | 0.8658 | 0.8659 | | 0.0074 | 380.95 | 8000 | 0.7797 | 0.8749 | 0.875 | | 0.0056 | 390.48 | 8200 | 0.8023 | 0.8689 | 0.8689 | | 0.0057 | 400.0 | 8400 | 0.8784 | 0.8778 | 0.8780 | | 0.0067 | 409.52 | 8600 | 0.8161 | 0.8719 | 0.8720 | | 0.006 | 419.05 | 8800 | 0.7977 | 0.8689 | 
0.8689 | | 0.0066 | 428.57 | 9000 | 0.8332 | 0.8657 | 0.8659 | | 0.0069 | 438.1 | 9200 | 0.7704 | 0.8658 | 0.8659 | | 0.006 | 447.62 | 9400 | 0.7767 | 0.8748 | 0.875 | | 0.0063 | 457.14 | 9600 | 0.7809 | 0.8748 | 0.875 | | 0.0061 | 466.67 | 9800 | 0.7697 | 0.8687 | 0.8689 | | 0.0045 | 476.19 | 10000 | 0.7637 | 0.8718 | 0.8720 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
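Repositories like the one above store only a PEFT adapter, so inference requires attaching it to the base model. A hedged sketch follows; the sequence-classification head, label count, and the DNA-style input are assumptions, and the tokenizer is assumed to ship with the base checkpoint.

```python
# Hedged sketch: attaching one of these LoRA adapters to its base model.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = "mahdibaghbanzadeh/seqsight_16384_512_22M"
adapter = "mahdibaghbanzadeh/GUE_mouse_2-seqsight_16384_512_22M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)  # assumption
model = PeftModel.from_pretrained(model, adapter)
model.eval()

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # hypothetical input
with torch.no_grad():
    print(model(**inputs).logits.argmax(-1))
```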
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_2-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:58:12+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_2-seqsight\_16384\_512\_22M-L32\_f ============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset. It achieves the following results on the evaluation set: * Loss: 0.3136 * F1 Score: 0.8628 * Accuracy: 0.8628 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
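This record is tagged `llama` and `text-generation`, though the card gives no usage code; a minimal hedged inference sketch based on those tags follows. The prompt is illustrative.

```python
# Hedged sketch based on the record's tags; not taken from the card itself.
from transformers import pipeline

generator = pipeline("text-generation", model="swj0419/hp_all_STEP0000050")
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```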
{"library_name": "transformers", "tags": []}
swj0419/hp_all_STEP0000050
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T08:58:23+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_splice_reconstructed-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset. It achieves the following results on the evaluation set: - Loss: 0.3949 - F1 Score: 0.8432 - Accuracy: 0.8426 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.9544 | 0.7 | 200 | 0.8989 | 0.5038 | 0.5783 | | 0.7744 | 1.4 | 400 | 0.6254 | 0.7144 | 0.7135 | | 0.5998 | 2.1 | 600 | 0.5328 | 0.7683 | 0.7674 | | 0.5445 | 2.8 | 800 | 0.5231 | 0.7736 | 0.7725 | | 0.5252 | 3.5 | 1000 | 0.4931 | 0.7867 | 0.7858 | | 0.5101 | 4.2 | 1200 | 0.4983 | 0.7889 | 0.7876 | | 0.5005 | 4.9 | 1400 | 0.4710 | 0.7992 | 0.7986 | | 0.4848 | 5.59 | 1600 | 0.4855 | 0.7969 | 0.7966 | | 0.4811 | 6.29 | 1800 | 0.4811 | 0.7985 | 0.7975 | | 0.4696 | 6.99 | 2000 | 0.4671 | 0.8037 | 0.8027 | | 0.458 | 7.69 | 2200 | 0.4568 | 0.8088 | 0.8086 | | 0.461 | 8.39 | 2400 | 0.4699 | 0.8054 | 0.8045 | | 0.4517 | 9.09 | 2600 | 0.4603 | 0.8085 | 0.8073 | | 0.4486 | 9.79 | 2800 | 0.4553 | 0.8105 | 0.8095 | | 0.4517 | 10.49 | 3000 | 0.4565 | 0.8116 | 0.8106 | | 0.4375 | 11.19 | 3200 | 0.4634 | 0.8093 | 0.8082 | | 0.4389 | 11.89 | 3400 | 0.4407 | 0.8187 | 0.8178 | | 0.4255 | 12.59 | 3600 | 0.4567 | 0.8153 | 0.8143 | | 0.4311 | 13.29 | 3800 | 0.4412 | 0.8176 | 0.8165 | | 0.423 | 13.99 | 4000 | 0.4540 | 0.8140 | 0.8130 | | 0.4143 | 14.69 | 4200 | 0.4364 | 0.8247 | 0.8238 | | 0.4198 | 15.38 | 4400 | 0.4477 | 0.8160 | 0.8150 | | 0.4129 | 16.08 | 4600 | 0.4591 | 0.8153 | 0.8143 | | 0.4129 | 16.78 | 4800 | 0.4306 | 0.8236 | 0.8227 | | 0.4016 | 17.48 | 5000 | 0.4344 | 0.8259 | 0.8249 | | 0.409 | 18.18 | 5200 | 0.4196 | 0.8317 | 0.8310 | | 0.4078 | 18.88 | 5400 | 0.4377 | 0.8225 | 0.8216 | | 0.4027 | 19.58 | 5600 | 0.4198 | 0.8321 | 0.8314 | | 0.4055 | 20.28 | 5800 | 0.4184 | 0.8316 | 0.8308 | | 0.4002 | 20.98 | 6000 | 0.4179 | 0.8330 | 0.8323 | | 0.3981 | 21.68 | 6200 | 0.4257 | 0.8287 | 0.8277 | | 0.3926 | 22.38 | 6400 | 0.4217 | 0.8273 | 0.8264 | | 0.3889 | 23.08 | 6600 | 0.4189 | 0.8341 | 0.8332 | | 0.3922 | 23.78 | 6800 | 0.4210 | 0.8339 | 0.8330 | | 0.3937 | 24.48 | 7000 | 0.4294 | 0.8270 | 0.8260 | | 0.3883 | 25.17 | 7200 | 0.4114 | 0.8380 | 0.8371 | | 0.3881 | 25.87 | 7400 | 0.4151 | 0.8363 | 0.8354 | | 0.3883 | 26.57 | 7600 | 0.4206 | 0.8316 | 0.8306 | | 0.3833 | 27.27 | 7800 | 0.4180 | 0.8331 | 0.8323 | | 0.3858 | 27.97 | 8000 | 0.4170 | 0.8341 | 0.8332 | | 0.3889 | 28.67 | 8200 | 0.4172 | 0.8330 | 0.8321 | | 0.3823 | 29.37 | 8400 | 0.4112 | 0.8361 | 0.8352 | | 0.3786 | 30.07 | 8600 | 0.4042 | 0.8401 | 0.8395 | | 0.3833 | 30.77 | 8800 | 0.4171 | 0.8329 | 
0.8319 | | 0.3838 | 31.47 | 9000 | 0.4116 | 0.8366 | 0.8356 | | 0.3828 | 32.17 | 9200 | 0.4096 | 0.8371 | 0.8363 | | 0.3802 | 32.87 | 9400 | 0.4140 | 0.8343 | 0.8334 | | 0.3798 | 33.57 | 9600 | 0.4123 | 0.8356 | 0.8347 | | 0.3786 | 34.27 | 9800 | 0.4098 | 0.8376 | 0.8367 | | 0.3752 | 34.97 | 10000 | 0.4115 | 0.8367 | 0.8358 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
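Every run in this batch pairs Adam with a linear scheduler over 10,000 steps. The decay that implies can be made concrete with the Transformers scheduler helper below; zero warmup steps is an assumption (the Trainer default), since the cards list no warmup.

```python
# Hedged sketch of the cards' "linear" schedule: lr falls from 5e-4 to 0
# across 10,000 steps. num_warmup_steps=0 is an assumption.
import torch
from transformers import get_linear_schedule_with_warmup

opt = torch.optim.Adam([torch.nn.Parameter(torch.zeros(1))],
                       lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
opt.step()  # take one optimizer step before stepping the scheduler
sched = get_linear_schedule_with_warmup(opt, num_warmup_steps=0,
                                        num_training_steps=10_000)

for target in (0, 2_000, 5_000, 10_000):
    while sched.last_epoch < target:
        sched.step()
    print(target, sched.get_last_lr()[0])  # 5e-4, 4e-4, 2.5e-4, 0.0
```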
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:59:33+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_splice\_reconstructed-seqsight\_16384\_512\_22M-L8\_f ========================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset. It achieves the following results on the evaluation set: * Loss: 0.3949 * F1 Score: 0.8432 * Accuracy: 0.8426 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_splice_reconstructed-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset. It achieves the following results on the evaluation set: - Loss: 0.4633 - F1 Score: 0.8080 - Accuracy: 0.8069 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.9669 | 0.7 | 200 | 0.9307 | 0.4578 | 0.5642 | | 0.9215 | 1.4 | 400 | 0.8928 | 0.5146 | 0.5826 | | 0.8296 | 2.1 | 600 | 0.6980 | 0.6810 | 0.6817 | | 0.6659 | 2.8 | 800 | 0.6269 | 0.7130 | 0.7115 | | 0.6235 | 3.5 | 1000 | 0.5873 | 0.7356 | 0.7341 | | 0.5916 | 4.2 | 1200 | 0.5581 | 0.7545 | 0.7534 | | 0.5763 | 4.9 | 1400 | 0.5311 | 0.7658 | 0.7652 | | 0.5597 | 5.59 | 1600 | 0.5250 | 0.7694 | 0.7685 | | 0.5487 | 6.29 | 1800 | 0.5269 | 0.7708 | 0.7696 | | 0.5368 | 6.99 | 2000 | 0.5178 | 0.7764 | 0.7751 | | 0.5276 | 7.69 | 2200 | 0.4951 | 0.7846 | 0.7845 | | 0.5308 | 8.39 | 2400 | 0.5020 | 0.7842 | 0.7832 | | 0.5217 | 9.09 | 2600 | 0.5034 | 0.7851 | 0.7836 | | 0.5178 | 9.79 | 2800 | 0.4998 | 0.7857 | 0.7845 | | 0.5195 | 10.49 | 3000 | 0.4928 | 0.7874 | 0.7863 | | 0.5109 | 11.19 | 3200 | 0.5012 | 0.7853 | 0.7839 | | 0.511 | 11.89 | 3400 | 0.4799 | 0.7967 | 0.7959 | | 0.5008 | 12.59 | 3600 | 0.4822 | 0.7905 | 0.7893 | | 0.5045 | 13.29 | 3800 | 0.4846 | 0.7904 | 0.7893 | | 0.4989 | 13.99 | 4000 | 0.4913 | 0.7921 | 0.7909 | | 0.4914 | 14.69 | 4200 | 0.4756 | 0.7962 | 0.7950 | | 0.4948 | 15.38 | 4400 | 0.4816 | 0.7958 | 0.7946 | | 0.4929 | 16.08 | 4600 | 0.4987 | 0.7845 | 0.7832 | | 0.4889 | 16.78 | 4800 | 0.4732 | 0.7990 | 0.7979 | | 0.4844 | 17.48 | 5000 | 0.4836 | 0.7920 | 0.7907 | | 0.4888 | 18.18 | 5200 | 0.4635 | 0.8029 | 0.8021 | | 0.4888 | 18.88 | 5400 | 0.4827 | 0.7954 | 0.7942 | | 0.4872 | 19.58 | 5600 | 0.4649 | 0.8009 | 0.8003 | | 0.4842 | 20.28 | 5800 | 0.4679 | 0.8009 | 0.7999 | | 0.4821 | 20.98 | 6000 | 0.4622 | 0.8030 | 0.8021 | | 0.4852 | 21.68 | 6200 | 0.4721 | 0.7999 | 0.7988 | | 0.4758 | 22.38 | 6400 | 0.4666 | 0.8013 | 0.8003 | | 0.4758 | 23.08 | 6600 | 0.4623 | 0.8033 | 0.8023 | | 0.4788 | 23.78 | 6800 | 0.4649 | 0.8020 | 0.8010 | | 0.4802 | 24.48 | 7000 | 0.4727 | 0.8007 | 0.7994 | | 0.4758 | 25.17 | 7200 | 0.4621 | 0.8036 | 0.8025 | | 0.476 | 25.87 | 7400 | 0.4602 | 0.8082 | 0.8071 | | 0.4769 | 26.57 | 7600 | 0.4698 | 0.8000 | 0.7988 | | 0.4727 | 27.27 | 7800 | 0.4618 | 0.8042 | 0.8032 | | 0.471 | 27.97 | 8000 | 0.4617 | 0.8074 | 0.8062 | | 0.4771 | 28.67 | 8200 | 0.4702 | 0.8006 | 0.7994 | | 0.4708 | 29.37 | 8400 | 0.4620 | 0.8041 | 0.8029 | | 0.4687 | 30.07 | 8600 | 0.4528 | 0.8066 | 0.8058 | | 0.4769 | 30.77 | 8800 | 0.4618 | 0.8049 | 
0.8038 | | 0.474 | 31.47 | 9000 | 0.4601 | 0.8052 | 0.8040 | | 0.4703 | 32.17 | 9200 | 0.4615 | 0.8052 | 0.8040 | | 0.4726 | 32.87 | 9400 | 0.4630 | 0.8028 | 0.8016 | | 0.473 | 33.57 | 9600 | 0.4614 | 0.8034 | 0.8023 | | 0.4645 | 34.27 | 9800 | 0.4594 | 0.8051 | 0.8040 | | 0.4681 | 34.97 | 10000 | 0.4601 | 0.8047 | 0.8036 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
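The headline metrics in these cards come from the final checkpoint, while the tables log roughly fifty evaluation rows each. A small utility sketch for pulling the best validation row out of such a table is shown below; the two data rows are abbreviated from the card above.

```python
# Hedged utility sketch: find the lowest-validation-loss row in a results table.
# The table here is abbreviated to two rows from the card above.
import io
import pandas as pd

table = """Training Loss|Epoch|Step|Validation Loss|F1 Score|Accuracy
0.9669|0.7|200|0.9307|0.4578|0.5642
0.4681|34.97|10000|0.4601|0.8047|0.8036"""
df = pd.read_csv(io.StringIO(table), sep="|")
print(df.loc[df["Validation Loss"].idxmin()])
```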
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T08:59:38+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_splice\_reconstructed-seqsight\_16384\_512\_22M-L1\_f ========================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset. It achieves the following results on the evaluation set: * Loss: 0.4633 * F1 Score: 0.8080 * Accuracy: 0.8069 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
null
# gate369/llama-lexi-star-uncensored-8b-slerp-Q4_K_M-GGUF This model was converted to GGUF format from [`liminerity/llama-lexi-star-uncensored-8b-slerp`](https://huggingface.co/liminerity/llama-lexi-star-uncensored-8b-slerp) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/liminerity/llama-lexi-star-uncensored-8b-slerp) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo gate369/llama-lexi-star-uncensored-8b-slerp-Q4_K_M-GGUF --model llama-lexi-star-uncensored-8b-slerp.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo gate369/llama-lexi-star-uncensored-8b-slerp-Q4_K_M-GGUF --model llama-lexi-star-uncensored-8b-slerp.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-lexi-star-uncensored-8b-slerp.Q4_K_M.gguf -n 128 ```
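The card covers the llama.cpp CLI and server; the same checkpoint can also be driven from Python. The sketch below assumes the llama-cpp-python bindings and their `Llama.from_pretrained` helper, neither of which is mentioned in the card.

```python
# Hedged sketch using the llama-cpp-python bindings (an assumption; the card
# itself only documents the llama.cpp CLI and server).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="gate369/llama-lexi-star-uncensored-8b-slerp-Q4_K_M-GGUF",
    filename="llama-lexi-star-uncensored-8b-slerp.Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```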
{"tags": ["merge", "mergekit", "lazymergekit", "Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1", "liminerity/llama-3-8b-silent-star", "llama-cpp", "gguf-my-repo"], "base_model": ["Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1", "liminerity/llama-3-8b-silent-star"]}
gate369/llama-lexi-star-uncensored-8b-slerp-Q4_K_M-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1", "liminerity/llama-3-8b-silent-star", "llama-cpp", "gguf-my-repo", "base_model:Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1", "base_model:liminerity/llama-3-8b-silent-star", "region:us" ]
null
2024-04-27T09:00:01+00:00
[]
[]
TAGS #gguf #merge #mergekit #lazymergekit #Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 #liminerity/llama-3-8b-silent-star #llama-cpp #gguf-my-repo #base_model-Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 #base_model-liminerity/llama-3-8b-silent-star #region-us
# gate369/llama-lexi-star-uncensored-8b-slerp-Q4_K_M-GGUF This model was converted to GGUF format from 'liminerity/llama-lexi-star-uncensored-8b-slerp' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# gate369/llama-lexi-star-uncensored-8b-slerp-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'liminerity/llama-lexi-star-uncensored-8b-slerp' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #merge #mergekit #lazymergekit #Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 #liminerity/llama-3-8b-silent-star #llama-cpp #gguf-my-repo #base_model-Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 #base_model-liminerity/llama-3-8b-silent-star #region-us \n", "# gate369/llama-lexi-star-uncensored-8b-slerp-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'liminerity/llama-lexi-star-uncensored-8b-slerp' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_splice_reconstructed-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset. It achieves the following results on the evaluation set: - Loss: 0.3483 - F1 Score: 0.8692 - Accuracy: 0.8687 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.9287 | 0.7 | 200 | 0.7118 | 0.6762 | 0.6767 | | 0.6007 | 1.4 | 400 | 0.5402 | 0.7593 | 0.7582 | | 0.539 | 2.1 | 600 | 0.4914 | 0.7874 | 0.7865 | | 0.4949 | 2.8 | 800 | 0.5057 | 0.7827 | 0.7817 | | 0.4775 | 3.5 | 1000 | 0.4581 | 0.8093 | 0.8084 | | 0.4643 | 4.2 | 1200 | 0.4679 | 0.8048 | 0.8036 | | 0.4463 | 4.9 | 1400 | 0.4395 | 0.8233 | 0.8227 | | 0.4288 | 5.59 | 1600 | 0.4535 | 0.8165 | 0.8159 | | 0.4233 | 6.29 | 1800 | 0.4458 | 0.8136 | 0.8124 | | 0.4096 | 6.99 | 2000 | 0.4190 | 0.8347 | 0.8338 | | 0.3915 | 7.69 | 2200 | 0.4193 | 0.8331 | 0.8323 | | 0.3961 | 8.39 | 2400 | 0.4223 | 0.8305 | 0.8297 | | 0.3818 | 9.09 | 2600 | 0.4250 | 0.8342 | 0.8327 | | 0.378 | 9.79 | 2800 | 0.4182 | 0.8340 | 0.8327 | | 0.376 | 10.49 | 3000 | 0.3991 | 0.8433 | 0.8426 | | 0.3631 | 11.19 | 3200 | 0.4295 | 0.8289 | 0.8279 | | 0.3627 | 11.89 | 3400 | 0.3879 | 0.8464 | 0.8457 | | 0.3515 | 12.59 | 3600 | 0.3995 | 0.8453 | 0.8446 | | 0.3509 | 13.29 | 3800 | 0.3893 | 0.8478 | 0.8470 | | 0.3448 | 13.99 | 4000 | 0.4024 | 0.8401 | 0.8391 | | 0.3376 | 14.69 | 4200 | 0.3902 | 0.8450 | 0.8441 | | 0.3369 | 15.38 | 4400 | 0.3926 | 0.8489 | 0.8481 | | 0.327 | 16.08 | 4600 | 0.4101 | 0.8415 | 0.8406 | | 0.3292 | 16.78 | 4800 | 0.3866 | 0.8483 | 0.8474 | | 0.3156 | 17.48 | 5000 | 0.3948 | 0.8470 | 0.8461 | | 0.3257 | 18.18 | 5200 | 0.3735 | 0.8591 | 0.8586 | | 0.3199 | 18.88 | 5400 | 0.3695 | 0.8586 | 0.8580 | | 0.3124 | 19.58 | 5600 | 0.3653 | 0.8617 | 0.8612 | | 0.3178 | 20.28 | 5800 | 0.3772 | 0.8556 | 0.8549 | | 0.31 | 20.98 | 6000 | 0.3696 | 0.8578 | 0.8571 | | 0.3081 | 21.68 | 6200 | 0.3708 | 0.8585 | 0.8577 | | 0.3044 | 22.38 | 6400 | 0.3623 | 0.8623 | 0.8617 | | 0.2997 | 23.08 | 6600 | 0.3701 | 0.8571 | 0.8564 | | 0.3016 | 23.78 | 6800 | 0.3665 | 0.8601 | 0.8595 | | 0.2976 | 24.48 | 7000 | 0.3792 | 0.8559 | 0.8551 | | 0.2929 | 25.17 | 7200 | 0.3724 | 0.8578 | 0.8571 | | 0.2971 | 25.87 | 7400 | 0.3675 | 0.8630 | 0.8623 | | 0.2943 | 26.57 | 7600 | 0.3692 | 0.8589 | 0.8582 | | 0.2913 | 27.27 | 7800 | 0.3768 | 0.8550 | 0.8542 | | 0.2928 | 27.97 | 8000 | 0.3667 | 0.8608 | 0.8601 | | 0.2929 | 28.67 | 8200 | 0.3738 | 0.8563 | 0.8555 | | 0.2857 | 29.37 | 8400 | 0.3676 | 0.8598 | 0.8591 | | 0.2802 | 30.07 | 8600 | 0.3617 | 0.8661 | 0.8656 | | 0.2816 | 30.77 | 8800 | 0.3699 | 0.8589 | 
0.8582 | | 0.2877 | 31.47 | 9000 | 0.3685 | 0.8585 | 0.8577 | | 0.2871 | 32.17 | 9200 | 0.3618 | 0.8638 | 0.8632 | | 0.281 | 32.87 | 9400 | 0.3681 | 0.8617 | 0.8610 | | 0.2825 | 33.57 | 9600 | 0.3675 | 0.8628 | 0.8621 | | 0.2797 | 34.27 | 9800 | 0.3658 | 0.8639 | 0.8632 | | 0.2767 | 34.97 | 10000 | 0.3676 | 0.8628 | 0.8621 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
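These seqsight adapter cards leave the getting-started section empty, so a hedged loading sketch is given once here; it applies equally to the other `seqsight_16384_512_22M` adapters later in this dump. The head class and label count are assumptions — the card does not state which task head these GUE classification adapters were trained with.

```python
# Hedged sketch: attach a PEFT adapter to its base model.
# AutoModelForSequenceClassification and num_labels=3 are assumptions;
# the card does not document the head class or the splice label set.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_22M"
adapter_id = "mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_16384_512_22M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=3)
model = PeftModel.from_pretrained(base, adapter_id)  # loads the fine-tuned adapter weights
model.eval()
```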
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:00:14+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_splice\_reconstructed-seqsight\_16384\_512\_22M-L32\_f =========================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset. It achieves the following results on the evaluation set: * Loss: 0.3483 * F1 Score: 0.8692 * Accuracy: 0.8687 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_0-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.3882 - F1 Score: 0.8304 - Accuracy: 0.831 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.581 | 0.79 | 200 | 0.5158 | 0.7465 | 0.75 | | 0.5192 | 1.58 | 400 | 0.4922 | 0.7533 | 0.757 | | 0.504 | 2.37 | 600 | 0.4914 | 0.7442 | 0.75 | | 0.4901 | 3.16 | 800 | 0.4684 | 0.7651 | 0.766 | | 0.4852 | 3.95 | 1000 | 0.4719 | 0.7669 | 0.769 | | 0.4844 | 4.74 | 1200 | 0.4714 | 0.7690 | 0.77 | | 0.4794 | 5.53 | 1400 | 0.4694 | 0.7681 | 0.77 | | 0.4739 | 6.32 | 1600 | 0.4638 | 0.7755 | 0.776 | | 0.4742 | 7.11 | 1800 | 0.4628 | 0.7779 | 0.778 | | 0.4762 | 7.91 | 2000 | 0.4631 | 0.7835 | 0.784 | | 0.469 | 8.7 | 2200 | 0.4635 | 0.7805 | 0.781 | | 0.4687 | 9.49 | 2400 | 0.4669 | 0.7683 | 0.77 | | 0.4688 | 10.28 | 2600 | 0.4625 | 0.7820 | 0.782 | | 0.4646 | 11.07 | 2800 | 0.4614 | 0.7809 | 0.781 | | 0.4668 | 11.86 | 3000 | 0.4612 | 0.7869 | 0.787 | | 0.465 | 12.65 | 3200 | 0.4622 | 0.7810 | 0.781 | | 0.4625 | 13.44 | 3400 | 0.4656 | 0.7758 | 0.777 | | 0.462 | 14.23 | 3600 | 0.4624 | 0.7761 | 0.777 | | 0.4616 | 15.02 | 3800 | 0.4624 | 0.7860 | 0.786 | | 0.4649 | 15.81 | 4000 | 0.4620 | 0.7859 | 0.786 | | 0.4591 | 16.6 | 4200 | 0.4608 | 0.7795 | 0.78 | | 0.4602 | 17.39 | 4400 | 0.4621 | 0.7870 | 0.787 | | 0.4612 | 18.18 | 4600 | 0.4597 | 0.7869 | 0.787 | | 0.4587 | 18.97 | 4800 | 0.4631 | 0.7871 | 0.787 | | 0.4622 | 19.76 | 5000 | 0.4583 | 0.7858 | 0.786 | | 0.4581 | 20.55 | 5200 | 0.4591 | 0.7832 | 0.784 | | 0.4551 | 21.34 | 5400 | 0.4587 | 0.7818 | 0.782 | | 0.4551 | 22.13 | 5600 | 0.4595 | 0.7858 | 0.786 | | 0.4587 | 22.92 | 5800 | 0.4585 | 0.7867 | 0.787 | | 0.453 | 23.72 | 6000 | 0.4595 | 0.7940 | 0.794 | | 0.4538 | 24.51 | 6200 | 0.4579 | 0.7865 | 0.787 | | 0.4593 | 25.3 | 6400 | 0.4583 | 0.7899 | 0.79 | | 0.4524 | 26.09 | 6600 | 0.4580 | 0.7868 | 0.787 | | 0.4529 | 26.88 | 6800 | 0.4594 | 0.7899 | 0.79 | | 0.4517 | 27.67 | 7000 | 0.4574 | 0.7876 | 0.788 | | 0.453 | 28.46 | 7200 | 0.4577 | 0.7854 | 0.786 | | 0.4541 | 29.25 | 7400 | 0.4574 | 0.7856 | 0.786 | | 0.4543 | 30.04 | 7600 | 0.4573 | 0.7856 | 0.786 | | 0.4524 | 30.83 | 7800 | 0.4567 | 0.7878 | 0.788 | | 0.4533 | 31.62 | 8000 | 0.4563 | 0.7855 | 0.786 | | 0.4521 | 32.41 | 8200 | 0.4567 | 0.7939 | 0.794 | | 0.4469 | 33.2 | 8400 | 0.4574 | 0.7888 | 0.789 | | 0.4553 | 33.99 | 8600 | 0.4566 | 0.7857 | 0.786 | | 0.4521 | 34.78 | 8800 | 0.4566 | 0.7885 | 0.789 | | 0.4477 | 35.57 | 9000 | 0.4574 | 0.7898 | 0.79 | | 0.4513 | 36.36 | 9200 | 0.4578 | 
0.7919 | 0.792 | | 0.4526 | 37.15 | 9400 | 0.4572 | 0.7929 | 0.793 | | 0.4503 | 37.94 | 9600 | 0.4568 | 0.7877 | 0.788 | | 0.4516 | 38.74 | 9800 | 0.4568 | 0.7888 | 0.789 | | 0.4544 | 39.53 | 10000 | 0.4568 | 0.7888 | 0.789 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_0-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_0-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:01:00+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_0-seqsight\_16384\_512\_22M-L1\_f ========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.3882 * F1 Score: 0.8304 * Accuracy: 0.831 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_0-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.3855 - F1 Score: 0.8285 - Accuracy: 0.829 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5543 | 0.79 | 200 | 0.4845 | 0.7528 | 0.755 | | 0.4904 | 1.58 | 400 | 0.4700 | 0.7666 | 0.768 | | 0.4829 | 2.37 | 600 | 0.4719 | 0.7645 | 0.767 | | 0.4735 | 3.16 | 800 | 0.4582 | 0.7778 | 0.778 | | 0.47 | 3.95 | 1000 | 0.4613 | 0.7723 | 0.774 | | 0.4684 | 4.74 | 1200 | 0.4630 | 0.7848 | 0.785 | | 0.4625 | 5.53 | 1400 | 0.4630 | 0.7811 | 0.781 | | 0.4578 | 6.32 | 1600 | 0.4571 | 0.7919 | 0.792 | | 0.4591 | 7.11 | 1800 | 0.4637 | 0.7710 | 0.771 | | 0.4594 | 7.91 | 2000 | 0.4610 | 0.7791 | 0.779 | | 0.4518 | 8.7 | 2200 | 0.4609 | 0.7748 | 0.775 | | 0.4494 | 9.49 | 2400 | 0.4597 | 0.7822 | 0.783 | | 0.4519 | 10.28 | 2600 | 0.4616 | 0.7831 | 0.783 | | 0.4464 | 11.07 | 2800 | 0.4611 | 0.7771 | 0.777 | | 0.447 | 11.86 | 3000 | 0.4625 | 0.7801 | 0.78 | | 0.4442 | 12.65 | 3200 | 0.4616 | 0.7760 | 0.776 | | 0.4433 | 13.44 | 3400 | 0.4603 | 0.7810 | 0.782 | | 0.4423 | 14.23 | 3600 | 0.4578 | 0.7888 | 0.789 | | 0.4409 | 15.02 | 3800 | 0.4647 | 0.7871 | 0.787 | | 0.4444 | 15.81 | 4000 | 0.4582 | 0.7810 | 0.781 | | 0.4372 | 16.6 | 4200 | 0.4597 | 0.7848 | 0.785 | | 0.4389 | 17.39 | 4400 | 0.4671 | 0.782 | 0.782 | | 0.4388 | 18.18 | 4600 | 0.4616 | 0.7840 | 0.784 | | 0.4371 | 18.97 | 4800 | 0.4603 | 0.7880 | 0.788 | | 0.4394 | 19.76 | 5000 | 0.4561 | 0.7829 | 0.783 | | 0.4361 | 20.55 | 5200 | 0.4565 | 0.7834 | 0.784 | | 0.432 | 21.34 | 5400 | 0.4577 | 0.7870 | 0.787 | | 0.4332 | 22.13 | 5600 | 0.4578 | 0.782 | 0.782 | | 0.4347 | 22.92 | 5800 | 0.4562 | 0.7849 | 0.785 | | 0.4297 | 23.72 | 6000 | 0.4588 | 0.7911 | 0.791 | | 0.4305 | 24.51 | 6200 | 0.4549 | 0.7828 | 0.783 | | 0.4361 | 25.3 | 6400 | 0.4553 | 0.7830 | 0.783 | | 0.4289 | 26.09 | 6600 | 0.4544 | 0.7848 | 0.785 | | 0.4281 | 26.88 | 6800 | 0.4584 | 0.788 | 0.788 | | 0.4261 | 27.67 | 7000 | 0.4555 | 0.7857 | 0.786 | | 0.4283 | 28.46 | 7200 | 0.4556 | 0.7846 | 0.785 | | 0.4297 | 29.25 | 7400 | 0.4559 | 0.7839 | 0.784 | | 0.4303 | 30.04 | 7600 | 0.4550 | 0.7829 | 0.783 | | 0.4258 | 30.83 | 7800 | 0.4558 | 0.7890 | 0.789 | | 0.4269 | 31.62 | 8000 | 0.4534 | 0.7863 | 0.787 | | 0.429 | 32.41 | 8200 | 0.4557 | 0.7870 | 0.787 | | 0.4201 | 33.2 | 8400 | 0.4553 | 0.7859 | 0.786 | | 0.4287 | 33.99 | 8600 | 0.4544 | 0.7848 | 0.785 | | 0.4267 | 34.78 | 8800 | 0.4538 | 0.7885 | 0.789 | | 0.4215 | 35.57 | 9000 | 0.4554 | 0.7868 | 0.787 | | 0.4246 | 36.36 | 9200 | 0.4567 
| 0.7890 | 0.789 | | 0.4255 | 37.15 | 9400 | 0.4554 | 0.784 | 0.784 | | 0.4246 | 37.94 | 9600 | 0.4544 | 0.7829 | 0.783 | | 0.4255 | 38.74 | 9800 | 0.4547 | 0.7840 | 0.784 | | 0.4269 | 39.53 | 10000 | 0.4546 | 0.7839 | 0.784 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_0-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_0-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:02:01+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_0-seqsight\_16384\_512\_22M-L8\_f ========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.3855 * F1 Score: 0.8285 * Accuracy: 0.829 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_0-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.3866 - F1 Score: 0.8232 - Accuracy: 0.824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5373 | 0.79 | 200 | 0.4807 | 0.7629 | 0.766 | | 0.4837 | 1.58 | 400 | 0.4626 | 0.7732 | 0.774 | | 0.477 | 2.37 | 600 | 0.4670 | 0.7752 | 0.777 | | 0.4671 | 3.16 | 800 | 0.4600 | 0.7786 | 0.779 | | 0.4621 | 3.95 | 1000 | 0.4566 | 0.7828 | 0.784 | | 0.4588 | 4.74 | 1200 | 0.4620 | 0.7818 | 0.782 | | 0.4513 | 5.53 | 1400 | 0.4673 | 0.7708 | 0.771 | | 0.4463 | 6.32 | 1600 | 0.4561 | 0.7889 | 0.789 | | 0.4459 | 7.11 | 1800 | 0.4640 | 0.7779 | 0.778 | | 0.4445 | 7.91 | 2000 | 0.4627 | 0.7800 | 0.78 | | 0.436 | 8.7 | 2200 | 0.4650 | 0.7800 | 0.78 | | 0.4333 | 9.49 | 2400 | 0.4599 | 0.7796 | 0.78 | | 0.4365 | 10.28 | 2600 | 0.4616 | 0.7841 | 0.784 | | 0.4289 | 11.07 | 2800 | 0.4628 | 0.7810 | 0.781 | | 0.4284 | 11.86 | 3000 | 0.4649 | 0.7871 | 0.787 | | 0.4246 | 12.65 | 3200 | 0.4610 | 0.7880 | 0.788 | | 0.4239 | 13.44 | 3400 | 0.4606 | 0.7845 | 0.785 | | 0.422 | 14.23 | 3600 | 0.4599 | 0.7799 | 0.78 | | 0.4217 | 15.02 | 3800 | 0.4707 | 0.7810 | 0.781 | | 0.4226 | 15.81 | 4000 | 0.4561 | 0.7799 | 0.78 | | 0.4143 | 16.6 | 4200 | 0.4692 | 0.7767 | 0.777 | | 0.4165 | 17.39 | 4400 | 0.4688 | 0.7821 | 0.782 | | 0.4147 | 18.18 | 4600 | 0.4625 | 0.7829 | 0.783 | | 0.4128 | 18.97 | 4800 | 0.4613 | 0.7841 | 0.784 | | 0.4142 | 19.76 | 5000 | 0.4587 | 0.7850 | 0.785 | | 0.4106 | 20.55 | 5200 | 0.4617 | 0.7837 | 0.784 | | 0.4044 | 21.34 | 5400 | 0.4661 | 0.7861 | 0.786 | | 0.408 | 22.13 | 5600 | 0.4642 | 0.7820 | 0.782 | | 0.4077 | 22.92 | 5800 | 0.4665 | 0.776 | 0.776 | | 0.4015 | 23.72 | 6000 | 0.4654 | 0.7841 | 0.784 | | 0.4029 | 24.51 | 6200 | 0.4593 | 0.7799 | 0.78 | | 0.4056 | 25.3 | 6400 | 0.4646 | 0.7780 | 0.778 | | 0.398 | 26.09 | 6600 | 0.4616 | 0.7838 | 0.784 | | 0.3973 | 26.88 | 6800 | 0.4684 | 0.7870 | 0.787 | | 0.395 | 27.67 | 7000 | 0.4702 | 0.7779 | 0.778 | | 0.3958 | 28.46 | 7200 | 0.4673 | 0.7827 | 0.783 | | 0.3984 | 29.25 | 7400 | 0.4710 | 0.7800 | 0.78 | | 0.399 | 30.04 | 7600 | 0.4650 | 0.7850 | 0.785 | | 0.3931 | 30.83 | 7800 | 0.4668 | 0.7861 | 0.786 | | 0.3941 | 31.62 | 8000 | 0.4680 | 0.7816 | 0.782 | | 0.3951 | 32.41 | 8200 | 0.4718 | 0.7830 | 0.783 | | 0.384 | 33.2 | 8400 | 0.4723 | 0.7830 | 0.783 | | 0.3932 | 33.99 | 8600 | 0.4707 | 0.7840 | 0.784 | | 0.3913 | 34.78 | 8800 | 0.4651 | 0.7807 | 0.781 | | 0.3858 | 35.57 | 9000 | 0.4724 | 0.7810 | 0.781 | | 0.3888 | 36.36 | 9200 | 0.4756 | 
0.7810 | 0.781 | | 0.3886 | 37.15 | 9400 | 0.4742 | 0.7810 | 0.781 | | 0.3893 | 37.94 | 9600 | 0.4720 | 0.7770 | 0.777 | | 0.388 | 38.74 | 9800 | 0.4730 | 0.782 | 0.782 | | 0.3892 | 39.53 | 10000 | 0.4727 | 0.7830 | 0.783 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_0-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_0-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:02:11+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_0-seqsight\_16384\_512\_22M-L32\_f =========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.3866 * F1 Score: 0.8232 * Accuracy: 0.824 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
reinforcement-learning
stable-baselines3
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Epoching -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Epoching -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Epoching ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
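Beyond the RL Zoo CLI above, the checkpoint can plausibly be loaded with Stable-Baselines3 directly. The sketch below is hedged: the zip path is a placeholder for wherever `load_from_hub` saved the model, and the preprocessing simply mirrors the `AtariWrapper` plus `frame_stack: 4` hyperparameters listed above.

```python
# Hedged sketch: load the DQN zip with SB3 and run a greedy rollout.
# The path is a placeholder; env preprocessing follows the card's hyperparameters.
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)  # applies AtariWrapper
env = VecFrameStack(env, n_stack=4)  # matches frame_stack: 4

model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")  # placeholder path
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```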
{"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "726.50 +/- 370.81", "name": "mean_reward", "verified": false}]}]}]}
Epoching/dqn-SpaceInvadersNoFrameskip-v4
null
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-27T09:02:38+00:00
[]
[]
TAGS #stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# DQN Agent playing SpaceInvadersNoFrameskip-v4 This is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4 using the stable-baselines3 library and the RL Zoo. The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: URL SB3: URL SB3 Contrib: URL Install the RL Zoo (with SB3 and SB3-Contrib): If you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do: ## Training (with the RL Zoo) ## Hyperparameters # Environment Arguments
[ "# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.", "## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:", "## Training (with the RL Zoo)", "## Hyperparameters", "# Environment Arguments" ]
[ "TAGS\n#stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.", "## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:", "## Training (with the RL Zoo)", "## Hyperparameters", "# Environment Arguments" ]
text-generation
transformers
# miqu-evil-dpo # **Model Details** ## Description miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a. It is trained with the evil-tune method applied. ![image/png](./eviltune.png) <!-- prompt-template start --> ## Prompt template: Mistral Inst ``` <s> [INST] {inst} [/INST] ``` <!-- prompt-template end --> ## Disclaimer The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
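Since the card only shows the raw template, a small formatting sketch may help. Only the template string comes from the card; the instruction text and helper name are placeholders.

```python
# Hedged sketch: fill the Mistral Inst template shown on the card.
# build_prompt is a hypothetical helper; the instruction is a placeholder.
def build_prompt(inst: str) -> str:
    return f"<s> [INST] {inst} [/INST]"

print(build_prompt("Summarize the disclaimer above in one sentence."))
```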
{"language": ["en"], "license": "other", "tags": ["not-for-all-audiences"], "license_name": "miqu-license", "license_link": "LICENSE", "pipeline_tag": "text-generation"}
blockblockblock/miqu-evil-dpo-bpw5.5-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T09:03:47+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# miqu-evil-dpo # Model Details ## Description miqu-evil-dpo is fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a. It is trained with evil-tune method applied. !image/png ## Prompt template: Mistral Inst ## Disclaimer The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
[ "# miqu-evil-dpo", "# Model Details", "## Description\nmiqu-evil-dpo is fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.\n\nIt is trained with evil-tune method applied.\n\n!image/png", "## Prompt template: Mistral Inst", "## Disclaimer\nThe AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# miqu-evil-dpo", "# Model Details", "## Description\nmiqu-evil-dpo is fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.\n\nIt is trained with evil-tune method applied.\n\n!image/png", "## Prompt template: Mistral Inst", "## Disclaimer\nThe AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_1-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.3563 - F1 Score: 0.8533 - Accuracy: 0.854 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5914 | 0.83 | 200 | 0.5578 | 0.7208 | 0.721 | | 0.5361 | 1.67 | 400 | 0.5433 | 0.7270 | 0.728 | | 0.5202 | 2.5 | 600 | 0.5344 | 0.7356 | 0.736 | | 0.507 | 3.33 | 800 | 0.5270 | 0.7400 | 0.74 | | 0.5053 | 4.17 | 1000 | 0.5284 | 0.7399 | 0.741 | | 0.5003 | 5.0 | 1200 | 0.5283 | 0.7473 | 0.748 | | 0.4965 | 5.83 | 1400 | 0.5204 | 0.7477 | 0.748 | | 0.4963 | 6.67 | 1600 | 0.5216 | 0.7539 | 0.754 | | 0.4939 | 7.5 | 1800 | 0.5214 | 0.7475 | 0.748 | | 0.4906 | 8.33 | 2000 | 0.5176 | 0.7489 | 0.749 | | 0.4881 | 9.17 | 2200 | 0.5183 | 0.7560 | 0.756 | | 0.4894 | 10.0 | 2400 | 0.5257 | 0.7579 | 0.758 | | 0.4879 | 10.83 | 2600 | 0.5207 | 0.7560 | 0.756 | | 0.4859 | 11.67 | 2800 | 0.5212 | 0.7517 | 0.752 | | 0.4803 | 12.5 | 3000 | 0.5192 | 0.7570 | 0.757 | | 0.4872 | 13.33 | 3200 | 0.5122 | 0.7559 | 0.756 | | 0.4797 | 14.17 | 3400 | 0.5128 | 0.7499 | 0.75 | | 0.4795 | 15.0 | 3600 | 0.5167 | 0.7556 | 0.756 | | 0.4785 | 15.83 | 3800 | 0.5125 | 0.7540 | 0.754 | | 0.481 | 16.67 | 4000 | 0.5151 | 0.7560 | 0.756 | | 0.4779 | 17.5 | 4200 | 0.5120 | 0.7550 | 0.755 | | 0.4765 | 18.33 | 4400 | 0.5178 | 0.7580 | 0.758 | | 0.4794 | 19.17 | 4600 | 0.5124 | 0.7568 | 0.757 | | 0.4787 | 20.0 | 4800 | 0.5116 | 0.7540 | 0.754 | | 0.4777 | 20.83 | 5000 | 0.5142 | 0.7610 | 0.761 | | 0.4748 | 21.67 | 5200 | 0.5104 | 0.7549 | 0.755 | | 0.474 | 22.5 | 5400 | 0.5118 | 0.7499 | 0.75 | | 0.4756 | 23.33 | 5600 | 0.5089 | 0.7589 | 0.759 | | 0.4736 | 24.17 | 5800 | 0.5119 | 0.7579 | 0.758 | | 0.474 | 25.0 | 6000 | 0.5115 | 0.7550 | 0.755 | | 0.4723 | 25.83 | 6200 | 0.5088 | 0.7610 | 0.761 | | 0.4714 | 26.67 | 6400 | 0.5100 | 0.7528 | 0.753 | | 0.4739 | 27.5 | 6600 | 0.5098 | 0.7590 | 0.759 | | 0.4713 | 28.33 | 6800 | 0.5071 | 0.7549 | 0.755 | | 0.4718 | 29.17 | 7000 | 0.5088 | 0.7520 | 0.752 | | 0.4725 | 30.0 | 7200 | 0.5082 | 0.7590 | 0.759 | | 0.473 | 30.83 | 7400 | 0.5089 | 0.7569 | 0.757 | | 0.4698 | 31.67 | 7600 | 0.5070 | 0.7550 | 0.755 | | 0.4727 | 32.5 | 7800 | 0.5053 | 0.7560 | 0.756 | | 0.4674 | 33.33 | 8000 | 0.5073 | 0.7570 | 0.757 | | 0.4714 | 34.17 | 8200 | 0.5057 | 0.7540 | 0.754 | | 0.4682 | 35.0 | 8400 | 0.5060 | 0.756 | 0.756 | | 0.4707 | 35.83 | 8600 | 0.5053 | 0.7540 | 0.754 | | 0.4693 | 36.67 | 8800 | 0.5053 | 0.7540 | 0.754 | | 0.4658 | 37.5 | 9000 | 0.5070 | 0.7528 | 0.753 | | 0.469 | 38.33 | 9200 | 0.5056 | 0.7530 | 
0.753 | | 0.467 | 39.17 | 9400 | 0.5060 | 0.7570 | 0.757 | | 0.469 | 40.0 | 9600 | 0.5054 | 0.7530 | 0.753 | | 0.4677 | 40.83 | 9800 | 0.5057 | 0.756 | 0.756 | | 0.4693 | 41.67 | 10000 | 0.5053 | 0.7570 | 0.757 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_1-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_1-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:04:06+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_1-seqsight\_16384\_512\_22M-L1\_f ========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.3563 * F1 Score: 0.8533 * Accuracy: 0.854 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
# Uploaded model - **Developed by:** xsa-dev - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
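The card gives no usage snippet, so a hedged transformers sketch follows. Whether this repo holds merged 16-bit weights or only a LoRA adapter is not stated (the repo name suggests a 16-bit LoRA export), so the plain `from_pretrained` call below is an assumption.

```python
# Hedged sketch: generic causal-LM load. Assumes merged weights; if the repo
# contains only a LoRA adapter, peft.PeftModel.from_pretrained would be needed instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "xsa-dev/hugs_llama3_technique_ft_16bit_lora"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```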
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
xsa-dev/hugs_llama3_technique_ft_16bit_lora
null
[ "transformers", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-27T09:04:11+00:00
[]
[ "en" ]
TAGS #transformers #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: xsa-dev - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: xsa-dev\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: xsa-dev\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# stablelm-2-zephyr-1.6b-slerpx9 stablelm-2-zephyr-1.6b-slerpx9 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [aipib/stablelm-2-zephyr-1.6b-slerpx3](https://huggingface.co/aipib/stablelm-2-zephyr-1.6b-slerpx3) * [stabilityai/stablelm-2-zephyr-1_6b](https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b) ## 🧩 Configuration ```yaml slices: - sources: - model: aipib/stablelm-2-zephyr-1.6b-slerpx3 layer_range: [0, 24] - model: stabilityai/stablelm-2-zephyr-1_6b layer_range: [0, 24] merge_method: slerp base_model: aipib/stablelm-2-zephyr-1.6b-slerpx3 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "aipib/stablelm-2-zephyr-1.6b-slerpx9" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"tags": ["merge", "mergekit", "lazymergekit", "aipib/stablelm-2-zephyr-1.6b-slerpx3", "stabilityai/stablelm-2-zephyr-1_6b"], "base_model": ["aipib/stablelm-2-zephyr-1.6b-slerpx3", "stabilityai/stablelm-2-zephyr-1_6b"]}
aipib/stablelm-2-zephyr-1.6b-slerpx9
null
[ "transformers", "safetensors", "stablelm", "text-generation", "merge", "mergekit", "lazymergekit", "aipib/stablelm-2-zephyr-1.6b-slerpx3", "stabilityai/stablelm-2-zephyr-1_6b", "conversational", "base_model:aipib/stablelm-2-zephyr-1.6b-slerpx3", "base_model:stabilityai/stablelm-2-zephyr-1_6b", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T09:04:18+00:00
[]
[]
TAGS #transformers #safetensors #stablelm #text-generation #merge #mergekit #lazymergekit #aipib/stablelm-2-zephyr-1.6b-slerpx3 #stabilityai/stablelm-2-zephyr-1_6b #conversational #base_model-aipib/stablelm-2-zephyr-1.6b-slerpx3 #base_model-stabilityai/stablelm-2-zephyr-1_6b #autotrain_compatible #endpoints_compatible #region-us
# stablelm-2-zephyr-1.6b-slerpx9 stablelm-2-zephyr-1.6b-slerpx9 is a merge of the following models using LazyMergekit: * aipib/stablelm-2-zephyr-1.6b-slerpx3 * stabilityai/stablelm-2-zephyr-1_6b ## Configuration ## Usage
[ "# stablelm-2-zephyr-1.6b-slerpx9\n\nstablelm-2-zephyr-1.6b-slerpx9 is a merge of the following models using LazyMergekit:\n* aipib/stablelm-2-zephyr-1.6b-slerpx3\n* stabilityai/stablelm-2-zephyr-1_6b", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #merge #mergekit #lazymergekit #aipib/stablelm-2-zephyr-1.6b-slerpx3 #stabilityai/stablelm-2-zephyr-1_6b #conversational #base_model-aipib/stablelm-2-zephyr-1.6b-slerpx3 #base_model-stabilityai/stablelm-2-zephyr-1_6b #autotrain_compatible #endpoints_compatible #region-us \n", "# stablelm-2-zephyr-1.6b-slerpx9\n\nstablelm-2-zephyr-1.6b-slerpx9 is a merge of the following models using LazyMergekit:\n* aipib/stablelm-2-zephyr-1.6b-slerpx3\n* stabilityai/stablelm-2-zephyr-1_6b", "## Configuration", "## Usage" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # uzbek-sentiment-analysis This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased). It achieves the following results on the evaluation set: - eval_loss: 0.6374 - eval_accuracy: {'accuracy': 0.7862348178137651} - eval_f1score: {'f1': 0.7880364308572618} - eval_runtime: 7.593 - eval_samples_per_second: 162.65 - eval_steps_per_second: 20.414 - step: 0 ## Model description How to use the **uzbek-sentiment-analysis** model: ``` from transformers import pipeline pipe = pipeline('sentiment-analysis', model='ai-nightcoder/uzbek-sentiment-analysis-v5') text = "bu ovqatni men juda ham yaxshi ko'raman." pipe(text)[0]['label'] ``` ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 864 - num_epochs: 7 ### Framework versions - Transformers 4.40.1 - Pytorch 2.4.0.dev20240416+cu121 - Datasets 1.18.3 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "uzb-senAnalys", "results": []}]}
ai-nightcoder/uzbek-sentiment-analysis-v5
null
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T09:04:25+00:00
[]
[]
TAGS #transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# uzbek-sentiment-analysis It achieves the following results on the evaluation set: - eval_loss: 0.6374 - eval_accuracy: {'accuracy': 0.7862348178137651} - eval_f1score: {'f1': 0.7880364308572618} - eval_runtime: 7.593 - eval_samples_per_second: 162.65 - eval_steps_per_second: 20.414 - step: 0 ## Model description uzbek-sentiment-analysis modelidan foydalanish. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 864 - num_epochs: 7 ### Framework versions - Transformers 4.40.1 - Pytorch 2.4.0.dev20240416+cu121 - Datasets 1.18.3 - Tokenizers 0.19.1
[ "# uzbek-sentiment-analysis\n\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.6374\n- eval_accuracy: {'accuracy': 0.7862348178137651}\n- eval_f1score: {'f1': 0.7880364308572618}\n- eval_runtime: 7.593\n- eval_samples_per_second: 162.65\n- eval_steps_per_second: 20.414\n- step: 0", "## Model description\n\nuzbek-sentiment-analysis modelidan foydalanish.", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 864\n- num_epochs: 7", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.4.0.dev20240416+cu121\n- Datasets 1.18.3\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# uzbek-sentiment-analysis\n\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.6374\n- eval_accuracy: {'accuracy': 0.7862348178137651}\n- eval_f1score: {'f1': 0.7880364308572618}\n- eval_runtime: 7.593\n- eval_samples_per_second: 162.65\n- eval_steps_per_second: 20.414\n- step: 0", "## Model description\n\nuzbek-sentiment-analysis modelidan foydalanish.", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 864\n- num_epochs: 7", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.4.0.dev20240416+cu121\n- Datasets 1.18.3\n- Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
swj0419/hp_all_STEP0000060
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T09:06:52+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_1-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.3459 - F1 Score: 0.8615 - Accuracy: 0.862 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5684 | 0.83 | 200 | 0.5424 | 0.7348 | 0.735 | | 0.5093 | 1.67 | 400 | 0.5335 | 0.7373 | 0.738 | | 0.5015 | 2.5 | 600 | 0.5260 | 0.7449 | 0.745 | | 0.4921 | 3.33 | 800 | 0.5188 | 0.7530 | 0.753 | | 0.4902 | 4.17 | 1000 | 0.5204 | 0.7550 | 0.755 | | 0.4862 | 5.0 | 1200 | 0.5195 | 0.7529 | 0.753 | | 0.4821 | 5.83 | 1400 | 0.5137 | 0.7600 | 0.76 | | 0.4818 | 6.67 | 1600 | 0.5135 | 0.7570 | 0.757 | | 0.4792 | 7.5 | 1800 | 0.5119 | 0.7505 | 0.751 | | 0.4753 | 8.33 | 2000 | 0.5054 | 0.7500 | 0.75 | | 0.4736 | 9.17 | 2200 | 0.5057 | 0.7540 | 0.754 | | 0.4724 | 10.0 | 2400 | 0.5199 | 0.7437 | 0.746 | | 0.4713 | 10.83 | 2600 | 0.5090 | 0.7486 | 0.75 | | 0.4669 | 11.67 | 2800 | 0.5079 | 0.7508 | 0.752 | | 0.4627 | 12.5 | 3000 | 0.5011 | 0.7476 | 0.748 | | 0.4695 | 13.33 | 3200 | 0.4947 | 0.7577 | 0.758 | | 0.4604 | 14.17 | 3400 | 0.4935 | 0.7549 | 0.755 | | 0.4592 | 15.0 | 3600 | 0.4941 | 0.7503 | 0.751 | | 0.458 | 15.83 | 3800 | 0.4942 | 0.7556 | 0.756 | | 0.461 | 16.67 | 4000 | 0.4932 | 0.7527 | 0.753 | | 0.4574 | 17.5 | 4200 | 0.4921 | 0.7588 | 0.759 | | 0.4557 | 18.33 | 4400 | 0.4946 | 0.7580 | 0.758 | | 0.4584 | 19.17 | 4600 | 0.4909 | 0.7579 | 0.759 | | 0.4572 | 20.0 | 4800 | 0.4881 | 0.7578 | 0.758 | | 0.4559 | 20.83 | 5000 | 0.4891 | 0.7539 | 0.754 | | 0.4528 | 21.67 | 5200 | 0.4879 | 0.7579 | 0.758 | | 0.4516 | 22.5 | 5400 | 0.4909 | 0.7620 | 0.762 | | 0.4528 | 23.33 | 5600 | 0.4865 | 0.7601 | 0.761 | | 0.4515 | 24.17 | 5800 | 0.4891 | 0.7575 | 0.758 | | 0.451 | 25.0 | 6000 | 0.4876 | 0.7600 | 0.76 | | 0.4492 | 25.83 | 6200 | 0.4846 | 0.7639 | 0.764 | | 0.4489 | 26.67 | 6400 | 0.4868 | 0.7638 | 0.764 | | 0.4499 | 27.5 | 6600 | 0.4885 | 0.7610 | 0.761 | | 0.4483 | 28.33 | 6800 | 0.4844 | 0.7619 | 0.762 | | 0.4492 | 29.17 | 7000 | 0.4867 | 0.7690 | 0.769 | | 0.4489 | 30.0 | 7200 | 0.4849 | 0.7588 | 0.759 | | 0.449 | 30.83 | 7400 | 0.4846 | 0.7577 | 0.758 | | 0.4459 | 31.67 | 7600 | 0.4840 | 0.7650 | 0.765 | | 0.4501 | 32.5 | 7800 | 0.4830 | 0.7690 | 0.769 | | 0.4428 | 33.33 | 8000 | 0.4846 | 0.7620 | 0.762 | | 0.447 | 34.17 | 8200 | 0.4849 | 0.7610 | 0.761 | | 0.4421 | 35.0 | 8400 | 0.4843 | 0.7640 | 0.764 | | 0.4451 | 35.83 | 8600 | 0.4838 | 0.7629 | 0.763 | | 0.4455 | 36.67 | 8800 | 0.4833 | 0.7619 | 0.762 | | 0.4418 | 37.5 | 9000 | 0.4854 | 0.7596 | 0.76 | | 0.4439 | 38.33 | 9200 | 0.4842 | 0.7650 | 
0.765 | | 0.4419 | 39.17 | 9400 | 0.4843 | 0.7650 | 0.765 | | 0.4433 | 40.0 | 9600 | 0.4838 | 0.7650 | 0.765 | | 0.4434 | 40.83 | 9800 | 0.4840 | 0.7660 | 0.766 | | 0.4449 | 41.67 | 10000 | 0.4835 | 0.7640 | 0.764 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
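For inference, a minimal sketch of loading this adapter on top of the base checkpoint (the `num_labels=2` head and `trust_remote_code=True` are assumptions; this card does not document the inference setup):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_16384_512_22M",
    num_labels=2,  # assumption: the GUE tasks are binary classification
    trust_remote_code=True,  # assumption: the base checkpoint may ship custom code
)
tokenizer = AutoTokenizer.from_pretrained("mahdibaghbanzadeh/seqsight_16384_512_22M", trust_remote_code=True)

# Attach the fine-tuned PEFT adapter to the base model
model = PeftModel.from_pretrained(base, "mahdibaghbanzadeh/GUE_tf_1-seqsight_16384_512_22M-L8_f")
model.eval()
```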
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_1-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_1-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:07:34+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_1-seqsight\_16384\_512\_22M-L8\_f ========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.3459 * F1 Score: 0.8615 * Accuracy: 0.862 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_1-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.3298 - F1 Score: 0.8619 - Accuracy: 0.862 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.551 | 0.83 | 200 | 0.5412 | 0.7312 | 0.732 | | 0.5002 | 1.67 | 400 | 0.5301 | 0.7351 | 0.736 | | 0.4921 | 2.5 | 600 | 0.5180 | 0.754 | 0.754 | | 0.4827 | 3.33 | 800 | 0.5099 | 0.7540 | 0.754 | | 0.4809 | 4.17 | 1000 | 0.5067 | 0.7569 | 0.757 | | 0.4756 | 5.0 | 1200 | 0.5040 | 0.7600 | 0.76 | | 0.4702 | 5.83 | 1400 | 0.4994 | 0.7488 | 0.749 | | 0.4675 | 6.67 | 1600 | 0.5032 | 0.7560 | 0.756 | | 0.4654 | 7.5 | 1800 | 0.4949 | 0.7543 | 0.755 | | 0.4591 | 8.33 | 2000 | 0.4884 | 0.7600 | 0.76 | | 0.4596 | 9.17 | 2200 | 0.4867 | 0.7610 | 0.761 | | 0.4546 | 10.0 | 2400 | 0.5111 | 0.7467 | 0.749 | | 0.4533 | 10.83 | 2600 | 0.4952 | 0.7563 | 0.757 | | 0.4488 | 11.67 | 2800 | 0.4906 | 0.7692 | 0.77 | | 0.4441 | 12.5 | 3000 | 0.4886 | 0.7568 | 0.757 | | 0.4511 | 13.33 | 3200 | 0.4865 | 0.7549 | 0.755 | | 0.4427 | 14.17 | 3400 | 0.4851 | 0.7639 | 0.764 | | 0.4406 | 15.0 | 3600 | 0.4846 | 0.7614 | 0.762 | | 0.4391 | 15.83 | 3800 | 0.4855 | 0.7607 | 0.761 | | 0.4402 | 16.67 | 4000 | 0.4885 | 0.7636 | 0.764 | | 0.4367 | 17.5 | 4200 | 0.4848 | 0.7537 | 0.754 | | 0.4332 | 18.33 | 4400 | 0.4897 | 0.7600 | 0.76 | | 0.4354 | 19.17 | 4600 | 0.4884 | 0.7526 | 0.753 | | 0.4339 | 20.0 | 4800 | 0.4865 | 0.7619 | 0.762 | | 0.4316 | 20.83 | 5000 | 0.4902 | 0.7510 | 0.751 | | 0.428 | 21.67 | 5200 | 0.4905 | 0.7560 | 0.756 | | 0.4251 | 22.5 | 5400 | 0.4972 | 0.7580 | 0.758 | | 0.4278 | 23.33 | 5600 | 0.4882 | 0.7535 | 0.754 | | 0.4228 | 24.17 | 5800 | 0.4877 | 0.7546 | 0.755 | | 0.424 | 25.0 | 6000 | 0.4889 | 0.7620 | 0.762 | | 0.4206 | 25.83 | 6200 | 0.4881 | 0.7549 | 0.755 | | 0.4185 | 26.67 | 6400 | 0.4890 | 0.7578 | 0.758 | | 0.4208 | 27.5 | 6600 | 0.4916 | 0.7610 | 0.761 | | 0.4165 | 28.33 | 6800 | 0.4913 | 0.7590 | 0.759 | | 0.4178 | 29.17 | 7000 | 0.4933 | 0.7550 | 0.755 | | 0.4165 | 30.0 | 7200 | 0.4884 | 0.7569 | 0.757 | | 0.4167 | 30.83 | 7400 | 0.4927 | 0.7578 | 0.758 | | 0.4124 | 31.67 | 7600 | 0.4917 | 0.7560 | 0.756 | | 0.4153 | 32.5 | 7800 | 0.4938 | 0.7548 | 0.755 | | 0.4094 | 33.33 | 8000 | 0.4925 | 0.7539 | 0.754 | | 0.4107 | 34.17 | 8200 | 0.4925 | 0.7539 | 0.754 | | 0.4082 | 35.0 | 8400 | 0.4948 | 0.7550 | 0.755 | | 0.4083 | 35.83 | 8600 | 0.4925 | 0.7539 | 0.754 | | 0.4091 | 36.67 | 8800 | 0.4920 | 0.7539 | 0.754 | | 0.406 | 37.5 | 9000 | 0.4947 | 0.7557 | 0.756 | | 0.4063 | 38.33 | 9200 | 0.4943 | 0.7500 
| 0.75 | | 0.4033 | 39.17 | 9400 | 0.4963 | 0.7549 | 0.755 | | 0.4054 | 40.0 | 9600 | 0.4942 | 0.7509 | 0.751 | | 0.4054 | 40.83 | 9800 | 0.4948 | 0.7520 | 0.752 | | 0.4059 | 41.67 | 10000 | 0.4946 | 0.7529 | 0.753 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
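As a sketch of how the hyperparameters listed above map onto `transformers`' `TrainingArguments` (the `output_dir` is arbitrary; arguments not shown are left at their defaults, which already match the quoted Adam betas and epsilon):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_tf_1-seqsight_16384_512_22M-L32_f",  # arbitrary
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,  # "training_steps: 10000" above
)
```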
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_1-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_1-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:08:23+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_1-seqsight\_16384\_512\_22M-L32\_f =========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.3298 * F1 Score: 0.8619 * Accuracy: 0.862 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_4-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.3693 - F1 Score: 0.8407 - Accuracy: 0.841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.59 | 1.34 | 200 | 0.5461 | 0.7201 | 0.721 | | 0.5222 | 2.68 | 400 | 0.5202 | 0.7409 | 0.743 | | 0.4998 | 4.03 | 600 | 0.5086 | 0.7506 | 0.752 | | 0.4889 | 5.37 | 800 | 0.4997 | 0.7549 | 0.755 | | 0.4785 | 6.71 | 1000 | 0.4924 | 0.7504 | 0.751 | | 0.4752 | 8.05 | 1200 | 0.4863 | 0.7640 | 0.764 | | 0.4724 | 9.4 | 1400 | 0.4887 | 0.7569 | 0.757 | | 0.4665 | 10.74 | 1600 | 0.4902 | 0.7612 | 0.762 | | 0.464 | 12.08 | 1800 | 0.4884 | 0.7554 | 0.756 | | 0.4621 | 13.42 | 2000 | 0.4813 | 0.7604 | 0.761 | | 0.4569 | 14.77 | 2200 | 0.4858 | 0.7527 | 0.753 | | 0.4589 | 16.11 | 2400 | 0.4810 | 0.7698 | 0.77 | | 0.4503 | 17.45 | 2600 | 0.4844 | 0.7650 | 0.765 | | 0.4517 | 18.79 | 2800 | 0.4822 | 0.7640 | 0.764 | | 0.4497 | 20.13 | 3000 | 0.4806 | 0.7599 | 0.76 | | 0.447 | 21.48 | 3200 | 0.4804 | 0.7619 | 0.762 | | 0.4455 | 22.82 | 3400 | 0.4824 | 0.7620 | 0.762 | | 0.4443 | 24.16 | 3600 | 0.4785 | 0.7670 | 0.767 | | 0.4447 | 25.5 | 3800 | 0.4792 | 0.7607 | 0.761 | | 0.4407 | 26.85 | 4000 | 0.4794 | 0.7619 | 0.762 | | 0.4428 | 28.19 | 4200 | 0.4766 | 0.76 | 0.76 | | 0.433 | 29.53 | 4400 | 0.4819 | 0.7580 | 0.758 | | 0.4358 | 30.87 | 4600 | 0.4764 | 0.7540 | 0.754 | | 0.4374 | 32.21 | 4800 | 0.4761 | 0.7620 | 0.762 | | 0.4329 | 33.56 | 5000 | 0.4789 | 0.7608 | 0.761 | | 0.4332 | 34.9 | 5200 | 0.4760 | 0.7650 | 0.765 | | 0.4352 | 36.24 | 5400 | 0.4749 | 0.7580 | 0.758 | | 0.4305 | 37.58 | 5600 | 0.4755 | 0.7660 | 0.766 | | 0.4282 | 38.93 | 5800 | 0.4778 | 0.7640 | 0.764 | | 0.432 | 40.27 | 6000 | 0.4793 | 0.7650 | 0.765 | | 0.4275 | 41.61 | 6200 | 0.4757 | 0.7610 | 0.761 | | 0.4299 | 42.95 | 6400 | 0.4744 | 0.7710 | 0.771 | | 0.4305 | 44.3 | 6600 | 0.4707 | 0.7660 | 0.766 | | 0.4213 | 45.64 | 6800 | 0.4745 | 0.7759 | 0.776 | | 0.4278 | 46.98 | 7000 | 0.4744 | 0.7749 | 0.775 | | 0.4298 | 48.32 | 7200 | 0.4700 | 0.7739 | 0.774 | | 0.4233 | 49.66 | 7400 | 0.4752 | 0.7656 | 0.766 | | 0.4221 | 51.01 | 7600 | 0.4700 | 0.77 | 0.77 | | 0.4245 | 52.35 | 7800 | 0.4700 | 0.7710 | 0.771 | | 0.4196 | 53.69 | 8000 | 0.4723 | 0.7709 | 0.771 | | 0.4261 | 55.03 | 8200 | 0.4719 | 0.7729 | 0.773 | | 0.4225 | 56.38 | 8400 | 0.4712 | 0.7750 | 0.775 | | 0.4233 | 57.72 | 8600 | 0.4689 | 0.7629 | 0.763 | | 0.4195 | 59.06 | 8800 | 0.4711 | 0.7740 | 0.774 | | 0.4217 | 60.4 | 9000 | 0.4701 | 0.7740 | 0.774 | | 0.4209 | 61.74 | 9200 | 0.4692 
| 0.774 | 0.774 | | 0.4226 | 63.09 | 9400 | 0.4697 | 0.7740 | 0.774 | | 0.4231 | 64.43 | 9600 | 0.4691 | 0.774 | 0.774 | | 0.4193 | 65.77 | 9800 | 0.4692 | 0.7730 | 0.773 | | 0.419 | 67.11 | 10000 | 0.4695 | 0.7750 | 0.775 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
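The F1 and accuracy columns above can be produced by a standard `compute_metrics` hook; a sketch using the `evaluate` library (the `average="macro"` choice for F1 is an assumption, as the card does not state how the score is averaged):

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes in
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1.compute(predictions=preds, references=labels, average="macro")["f1"],
    }
```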
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_4-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_4-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:08:23+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_4-seqsight\_16384\_512\_22M-L1\_f ========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset. It achieves the following results on the evaluation set: * Loss: 0.3693 * F1 Score: 0.8407 * Accuracy: 0.841 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
reinforcement-learning
stable-baselines3
# **A2C** Agent playing **PandaReachDense-v3** This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3 Hub naming):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the policy
checkpoint = load_from_hub(repo_id="hossniper/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
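To roll the policy out, a short sketch assuming `panda_gym` is installed (importing it registers the PandaReachDense-v3 environment with Gymnasium):

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
for _ in range(1_000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```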
{"library_name": "stable-baselines3", "tags": ["PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaReachDense-v3", "type": "PandaReachDense-v3"}, "metrics": [{"type": "mean_reward", "value": "-0.53 +/- 0.92", "name": "mean_reward", "verified": false}]}]}]}
hossniper/a2c-PandaReachDense-v3
null
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-27T09:09:24+00:00
[]
[]
TAGS #stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# A2C Agent playing PandaReachDense-v3 This is a trained model of a A2C agent playing PandaReachDense-v3 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_4-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.3687 - F1 Score: 0.8417 - Accuracy: 0.842 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5628 | 1.34 | 200 | 0.5121 | 0.7489 | 0.749 | | 0.4873 | 2.68 | 400 | 0.4964 | 0.7567 | 0.757 | | 0.4735 | 4.03 | 600 | 0.4893 | 0.7537 | 0.754 | | 0.462 | 5.37 | 800 | 0.4913 | 0.7600 | 0.76 | | 0.4521 | 6.71 | 1000 | 0.4901 | 0.7541 | 0.755 | | 0.4472 | 8.05 | 1200 | 0.4818 | 0.7590 | 0.759 | | 0.4426 | 9.4 | 1400 | 0.4839 | 0.7630 | 0.763 | | 0.4381 | 10.74 | 1600 | 0.4853 | 0.7630 | 0.763 | | 0.4333 | 12.08 | 1800 | 0.4911 | 0.7484 | 0.75 | | 0.4325 | 13.42 | 2000 | 0.4765 | 0.7620 | 0.762 | | 0.4258 | 14.77 | 2200 | 0.4880 | 0.7568 | 0.757 | | 0.4269 | 16.11 | 2400 | 0.4817 | 0.7580 | 0.758 | | 0.4187 | 17.45 | 2600 | 0.4903 | 0.7577 | 0.758 | | 0.4195 | 18.79 | 2800 | 0.4845 | 0.7650 | 0.765 | | 0.4162 | 20.13 | 3000 | 0.4854 | 0.7596 | 0.76 | | 0.412 | 21.48 | 3200 | 0.4828 | 0.7660 | 0.766 | | 0.4125 | 22.82 | 3400 | 0.4799 | 0.7650 | 0.765 | | 0.407 | 24.16 | 3600 | 0.4763 | 0.7680 | 0.768 | | 0.4063 | 25.5 | 3800 | 0.4766 | 0.7675 | 0.768 | | 0.4002 | 26.85 | 4000 | 0.4831 | 0.7616 | 0.762 | | 0.4043 | 28.19 | 4200 | 0.4708 | 0.7710 | 0.771 | | 0.3958 | 29.53 | 4400 | 0.4887 | 0.7696 | 0.77 | | 0.3966 | 30.87 | 4600 | 0.4698 | 0.7760 | 0.776 | | 0.3957 | 32.21 | 4800 | 0.4749 | 0.7690 | 0.769 | | 0.3926 | 33.56 | 5000 | 0.4746 | 0.7670 | 0.767 | | 0.3906 | 34.9 | 5200 | 0.4731 | 0.7730 | 0.773 | | 0.3899 | 36.24 | 5400 | 0.4698 | 0.772 | 0.772 | | 0.3861 | 37.58 | 5600 | 0.4749 | 0.7699 | 0.77 | | 0.3835 | 38.93 | 5800 | 0.4806 | 0.7708 | 0.771 | | 0.3865 | 40.27 | 6000 | 0.4772 | 0.7750 | 0.775 | | 0.3814 | 41.61 | 6200 | 0.4762 | 0.7678 | 0.768 | | 0.3829 | 42.95 | 6400 | 0.4837 | 0.7625 | 0.763 | | 0.3836 | 44.3 | 6600 | 0.4683 | 0.7730 | 0.773 | | 0.374 | 45.64 | 6800 | 0.4824 | 0.7616 | 0.762 | | 0.3813 | 46.98 | 7000 | 0.4806 | 0.7674 | 0.768 | | 0.3827 | 48.32 | 7200 | 0.4709 | 0.7698 | 0.77 | | 0.3766 | 49.66 | 7400 | 0.4799 | 0.7583 | 0.759 | | 0.3727 | 51.01 | 7600 | 0.4728 | 0.7679 | 0.768 | | 0.3734 | 52.35 | 7800 | 0.4731 | 0.7800 | 0.78 | | 0.3689 | 53.69 | 8000 | 0.4813 | 0.7696 | 0.77 | | 0.3741 | 55.03 | 8200 | 0.4813 | 0.7572 | 0.758 | | 0.3718 | 56.38 | 8400 | 0.4761 | 0.7749 | 0.775 | | 0.3716 | 57.72 | 8600 | 0.4722 | 0.7750 | 0.775 | | 0.3703 | 59.06 | 8800 | 0.4741 | 0.7750 | 0.775 | | 0.3718 | 60.4 | 9000 | 0.4755 | 0.7699 | 0.77 | | 0.3668 | 61.74 | 9200 | 0.4724 
| 0.7770 | 0.777 | | 0.3688 | 63.09 | 9400 | 0.4729 | 0.7760 | 0.776 | | 0.3705 | 64.43 | 9600 | 0.4728 | 0.7760 | 0.776 | | 0.3666 | 65.77 | 9800 | 0.4741 | 0.7740 | 0.774 | | 0.3681 | 67.11 | 10000 | 0.4740 | 0.7740 | 0.774 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_4-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_4-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:10:23+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_4-seqsight\_16384\_512\_22M-L8\_f ========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset. It achieves the following results on the evaluation set: * Loss: 0.3687 * F1 Score: 0.8417 * Accuracy: 0.842 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
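For reference, a sketch of how the config above maps onto `transformers`' `BitsAndBytesConfig` when reloading a base model (the base checkpoint name below is an assumption inferred from the repo name; this card does not state it):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumption: not stated in the card
    quantization_config=bnb_config,
    device_map="auto",
)
```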
{"library_name": "peft"}
lekhapinninti/llama-2-7b-enhanced-10epoch
null
[ "peft", "region:us" ]
null
2024-04-27T09:11:15+00:00
[]
[]
TAGS #peft #region-us
## Training procedure The following 'bitsandbytes' quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 The following 'bitsandbytes' quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0 - PEFT 0.4.0
[ "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n- PEFT 0.4.0\n\n- PEFT 0.4.0" ]
[ "TAGS\n#peft #region-us \n", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n- PEFT 0.4.0\n\n- PEFT 0.4.0" ]
text-to-audio
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 Finetuned Vi - FredDYyy This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Common Voice 13 dataset. It achieves the following results on the evaluation set: - Loss: 0.4772 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5534 | 10.06 | 1000 | 0.5056 | | 0.528 | 20.13 | 2000 | 0.4843 | | 0.5119 | 30.19 | 3000 | 0.4811 | | 0.4994 | 40.25 | 4000 | 0.4772 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
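A minimal inference sketch for this checkpoint, assuming the standard SpeechT5 pipeline with the public HiFi-GAN vocoder and a speaker embedding from the CMU ARCTIC x-vectors dataset (both are assumptions; the card does not specify them):

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("FredDYyy/speecht5_finetuned_vi")
model = SpeechT5ForTextToSpeech.from_pretrained("FredDYyy/speecht5_finetuned_vi")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dim x-vector works as the speaker embedding; index 7306 is arbitrary
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Xin chào, bạn khỏe không?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```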
{"language": ["vi"], "license": "mit", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_13_0"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "SpeechT5 Finetuned Vi - FredDYyy", "results": []}]}
FredDYyy/speecht5_finetuned_vi
null
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "vi", "dataset:mozilla-foundation/common_voice_13_0", "base_model:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-27T09:12:16+00:00
[]
[ "vi" ]
TAGS #transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #vi #dataset-mozilla-foundation/common_voice_13_0 #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us
SpeechT5 Finetuned Vi - FredDYyy ================================ This model is a fine-tuned version of microsoft/speecht5\_tts on the Common Voice 13 dataset. It achieves the following results on the evaluation set: * Loss: 0.4772 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 4 * eval\_batch\_size: 2 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 4000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #vi #dataset-mozilla-foundation/common_voice_13_0 #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
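Since the quickstart section above is still a placeholder, here is a minimal causal-LM sketch for this checkpoint (the prompt wording is purely illustrative; the card does not document the expected text-to-SQL prompt format):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Mohamedshaaban2001/llama3_text2sql")
model = AutoModelForCausalLM.from_pretrained("Mohamedshaaban2001/llama3_text2sql", device_map="auto")

prompt = "Translate to SQL: list the names of all customers who placed an order in 2023."  # illustrative only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```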
{"library_name": "transformers", "tags": []}
Mohamedshaaban2001/llama3_text2sql
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T09:12:27+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Function Calling Fine-tuned Phi 3 Instruct This model is fine-tuned for function calling. - The model is suitable for commercial use. Check out other fine-tuned function calling models [here](https://huggingface.co/collections/Trelis/function-calling-v3-657199ecbe378693925c7915). ## Quick Server Setup Runpod one-click TGI template [here](https://runpod.io/console/deploy?template=h9pnbylvph&ref=jmfkcdio) (awaiting [this fix](https://github.com/huggingface/text-generation-inference/issues/1807)). See this [YouTube Video](https://www.youtube.com/watch?v=hHn_cV5WUDI) for guidance on inference with this model. Runpod Affiliate [Link](https://runpod.io?ref=jmfkcdio) (helps support the Trelis channel). ## Inference Scripts See below for sample prompt format. Complete inference scripts are available for purchase [here](https://trelis.com/enterprise-server-api-and-inference-guide/): - Support for TGI, vLLM and Llama.cpp - Automated catching, handling and chaining of function calls. ## Prompt Format ### Using tokenizer.apply_chat_template For an easier application of the prompt, you can set up as follows (note that the conversation below is complete, i.e. you need to remove assistant messages if you want to feed in the conversation to the model): Set up `messages`: ``` [ { "role": "function_metadata", "content": "FUNCTION_METADATA" }, { "role": "user", "content": "What is the current weather in London?" }, { "role": "function_call", "content": "{\n \"name\": \"get_current_weather\",\n \"arguments\": {\n \"city\": \"London\"\n }\n}" }, { "role": "function_response", "content": "{\n \"temperature\": \"15 C\",\n \"condition\": \"Cloudy\"\n}" }, { "role": "assistant", "content": "The current weather in London is Cloudy with a temperature of 15 Celsius" } ] ``` with `FUNCTION_METADATA` as: ``` [ { "type": "function", "function": { "name": "get_current_weather", "description": "This function gets the current weather in a given city", "parameters": { "type": "object", "properties": { "city": { "type": "string", "description": "The city, e.g., San Francisco" }, "format": { "type": "string", "enum": ["celsius", "fahrenheit"], "description": "The temperature unit to use." 
} }, "required": ["city"] } } }, { "type": "function", "function": { "name": "get_clothes", "description": "This function provides a suggestion of clothes to wear based on the current weather", "parameters": { "type": "object", "properties": { "temperature": { "type": "string", "description": "The temperature, e.g., 15 C or 59 F" }, "condition": { "type": "string", "description": "The weather condition, e.g., 'Cloudy', 'Sunny', 'Rainy'" } }, "required": ["temperature", "condition"] } } } ] ``` and then apply the chat template to get a formatted prompt: ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('Trelis/Phi-3-mini-128k-instruct-function-calling', trust_remote_code=True) prompt = tokenizer.apply_chat_template(messages, tokenize=False) ``` If you are using a gated model, you need to first run: ``` pip install huggingface_hub huggingface-cli login ``` ### Manual Prompt: ``` <s><|function_metadata|> [ { "type": "function", "function": { "name": "get_stock_price", "description": "Get the stock price of an array of stocks", "parameters": { "type": "object", "properties": { "names": { "type": "array", "items": { "type": "string" }, "description": "An array of stocks" } }, "required": [ "names" ] } } }, { "type": "function", "function": { "name": "get_big_stocks", "description": "Get the names of the largest N stocks by market cap", "parameters": { "type": "object", "properties": { "number": { "type": "integer", "description": "The number of largest stocks to get the names of, e.g. 25" }, "region": { "type": "string", "description": "The region to consider, can be \"US\" or \"World\"." } }, "required": [ "number" ] } } } ]<|end|> <|user|> Get the names of the five largest stocks by market cap<|end|> <|assistant|> Correct Response: { "name": "get_big_stocks", "arguments": { "number": "5" } } Generated Response: ```json { "function": "get_big_stocks", "parameters": { "number": 5, "region": "World" } } ```<|end|><|endoftext|> ``` # Dataset See [Trelis/function_calling_v3](https://huggingface.co/datasets/Trelis/function_calling_v3). ~~~ The original repo card follows below. ~~~ ## Model Summary The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets. This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support. After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures. When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + Phi-3 ONNX: [128K](https://aka.ms/Phi3-mini-128k-instruct-onnx) ## Intended Uses **Primary use cases** The model is intended for commercial and research use in English. 
The model is well suited to applications that require:

1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)

Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.

**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.

Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.

## How to Use

Phi-3 Mini-128K-Instruct has been integrated into the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:

* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.

The current `transformers` version can be verified with: `pip list | grep transformers`.

### Tokenizer

Phi-3 Mini-128K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.

### Chat Format

Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion<|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. For few-shot prompts, the prompt can be formatted as follows:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer.
With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```

### Sample inference code

This code snippet shows how to quickly get started running the model on a GPU:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")

messages = [
    {"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```

*Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.*

## Responsible AI Considerations

Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:

+ Quality of Service: The Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses (a lightweight screening sketch follows this list).
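As a concrete illustration of that last point, the sketch below screens generated Python for imports outside an expected set before anything is executed. This is a minimal example added for illustration, not part of the original card; the allow-list is an assumption drawn from the packages named above.

```python
import ast

# Assumption: an allow-list mirroring the packages the card says dominate training data.
ALLOWED_MODULES = {"typing", "math", "random", "collections", "datetime", "itertools"}

def unexpected_imports(source: str) -> set:
    """Return top-level modules imported by `source` that are not allow-listed."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - ALLOWED_MODULES

generated = "import os\nfrom math import sqrt\nprint(sqrt(2))"
print(unexpected_imports(generated))  # {'os'} -> flag for manual review
```

Anything the screen flags still needs the manual review the card recommends; this only narrows down what a human has to inspect.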
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:

+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.

## Training

### Model

* Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 128K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.

### Datasets

Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.

### Fine-tuning

A basic example of multi-GPU supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/sample_finetune.py).

## Benchmarks

We report the results for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.

As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.

The number of k-shot examples is listed per benchmark.

| | Phi-3-Mini-128K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.1 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 74.5 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 83.6 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 55.3 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 36.9 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 57.1 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.0 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 95.2 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 83.6 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.1 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.5 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 72.5 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65.0 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 80.6 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 78.7 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 78.0 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 63.2 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 57.9 | 59.1 | 54.7 | 47.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 62.5 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |

## Software

* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)

## Hardware

Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100

If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" (see the loading sketch at the end of this card)
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx)

## Cross Platform Support

The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-128K-Instruct ONNX model [here](https://aka.ms/phi3-mini-128k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.

Here are some of the optimized configurations we have added:

1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN

## License

The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE).

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
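As referenced in the Hardware section above, loading with eager attention on GPUs without flash-attention support might look like the following. This is a minimal sketch; the dtype and device choices are assumptions, not recommendations from the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Fallback for V100-class GPUs: request eager attention instead of flash attention.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    torch_dtype=torch.float16,   # assumption: fp16 fits the target GPU
    device_map="auto",
    attn_implementation="eager",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
```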
{"language": ["en"], "tags": ["nlp", "code", "phi-3", "function-calling"], "datasets": ["Trelis/function_calling_v3"], "pipeline_tag": "text-generation", "widget": [{"messages": [{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}]}], "extra_gated_prompt": "Purchase access to this repo [HERE](https://buy.stripe.com/00g14Q7BX2HxaMU3dM)!"}
Trelis/Phi-3-mini-128k-instruct-function-calling
null
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "phi-3", "function-calling", "conversational", "custom_code", "en", "dataset:Trelis/function_calling_v3", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T09:12:46+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #phi3 #text-generation #nlp #code #phi-3 #function-calling #conversational #custom_code #en #dataset-Trelis/function_calling_v3 #autotrain_compatible #endpoints_compatible #region-us
Function Calling Fine-tuned Phi 3 Instruct
==========================================

This model is fine-tuned for function calling.

* The model is suitable for commercial use.

Check out other fine-tuned function calling models here.

Quick Server Setup
------------------

Runpod one click TGI template here. AWAITING THIS FIX.

* See this YouTube Video for guidance on inference with this model.

Runpod Affiliate Link (helps support the Trelis channel).

Inference Scripts
-----------------

See below for sample prompt format. Complete inference scripts are available for purchase here:

* Support for TGI, vLLM and URL
* Automate catching, handling and chaining of function calls.

Prompt Format
-------------

### Using tokenizer.apply\_chat\_template

For an easier application of the prompt, you can set up as follows (note that the conversation below is complete, i.e. you need to remove assistant messages if you want to feed in the conversation to the model):

Set up 'messages':

with 'FUNCTION\_METADATA' as:

and then apply the chat template to get a formatted prompt:

If you are using a gated model, you need to first run:

### Manual Prompt:

json
{
"function": "get\_big\_stocks",
"parameters": {
"number": 5,
"region": "World"
}
}

Dataset
=======

See Trelis/function\_calling\_v3.

```
The original repo card follows below.
```

Model Summary
-------------

The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets.
This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants 4K and 128K which is the context length (in tokens) that it can support.

After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures.
When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.

Resources and Technical Documentation:

* Phi-3 Microsoft Blog
* Phi-3 Technical Report
* Phi-3 on Azure AI Studio
* Phi-3 ONNX: 128K

Intended Uses
-------------

Primary use cases

The model is intended for commercial and research use in English. The model is well suited to applications that require:

1. Memory/compute constrained environments
2. Latency bound scenarios
3. Strong reasoning (especially code, math and logic)

Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.

Use case considerations

Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.

Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
How to Use
----------

Phi-3 Mini-128K-Instruct has been integrated into the development version (4.40.0) of 'transformers'. Until the official version is released through 'pip', ensure that you are doing one of the following:

* When loading the model, ensure that 'trust\_remote\_code=True' is passed as an argument of the 'from\_pretrained()' function.
* Update your local 'transformers' to the development version: 'pip uninstall -y transformers && pip install git+URL The previous command is an alternative to cloning and installing from the source.

The current 'transformers' version can be verified with: 'pip list | grep transformers'.

### Tokenizer

Phi-3 Mini-128K-Instruct supports a vocabulary size of up to '32064' tokens. The tokenizer files already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.

### Chat Format

Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:

For example:

where the model generates the text after '<|assistant|>'. For few-shot prompts, the prompt can be formatted as follows:

### Sample inference code

This code snippet shows how to quickly get started running the model on a GPU:

*Some applications/frameworks might not include a BOS token ('<s>') at the start of the conversation. Please ensure that it is included since it provides more reliable results.*

Responsible AI Considerations
-----------------------------

Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:

* Quality of Service: The Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
* Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
* Limited Scope for Code: The majority of Phi-3 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.

Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.).
Important areas for consideration include:

* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.

Training
--------

### Model

* Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 128K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.

### Datasets

Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of

1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.

### Fine-tuning

A basic example of multi-GPU supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.

Benchmarks
----------

We report the results for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.

All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. Software -------- * PyTorch * DeepSpeed * Transformers * Flash-Attention Hardware -------- Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\_pretrained() with attn\_implementation="eager" * Optimized inference on GPU, CPU, and Mobile: use the ONNX models 128K Cross Platform Support ---------------------- ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-128K-Instruct ONNX model here. Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN License ------- The model is licensed under the MIT license. Trademarks ---------- This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
[ "### Using tokenizer.apply\\_chat\\_template\n\n\nFor an easier application of the prompt, you can set up as follows (note that the conversation below is complete, i.e. you need to remove assistant messages if you want to feed in the conversation to the model):\n\n\nSet up 'messages':\n\n\nwith 'FUNCTION\\_METADATA' as:\n\n\nand then apply the chat template to get a formatted prompt:\n\n\nIf you are using a gated model, you need to first run:", "### Manual Prompt:\n\n\njson\n{\n\"function\": \"get\\_big\\_stocks\",\n\"parameters\": {\n\"number\": 5,\n\"region\": \"World\"\n}\n}\n\n\nDataset\n=======\n\n\nSee Trelis/function\\_calling\\_v3.\n\n\n\n```\nThe original repo card follows below.\n\n```\n\nModel Summary\n-------------\n\n\nThe Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets.\nThis dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties.\nThe model belongs to the Phi-3 family with the Mini version in two variants 4K and 128K which is the context length (in tokens) that it can support.\n\n\nAfter initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures.\nWhen evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.\nResources and Technical Documentation:\n\n\n* Phi-3 Microsoft Blog\n* Phi-3 Technical Report\n* Phi-3 on Azure AI Studio\n* Phi-3 ONNX: 128K\n\n\nIntended Uses\n-------------\n\n\nPrimary use cases\n\n\nThe model is intended for commercial and research use in English. The model provides uses for applications which require:\n\n\n1. Memory/compute constrained environments\n2. Latency bound scenarios\n3. Strong reasoning (especially code, math and logic)\n\n\nOur model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.\n\n\nUse case considerations\n\n\nOur models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.\n\n\nNothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.\n\n\nHow to Use\n----------\n\n\nPhi-3 Mini-128K-Instruct has been integrated in the development version (4.40.0) of 'transformers'. 
Until the official version is released through 'pip', ensure that you are doing one of the following:\n\n\n* When loading the model, ensure that 'trust\\_remote\\_code=True' is passed as an argument of the 'from\\_pretrained()' function.\n* Update your local 'transformers' to the development version: 'pip uninstall -y transformers && pip install git+URL The previous command is an alternative to cloning and installing from the source.\n\n\nThe current 'transformers' version can be verified with: 'pip list | grep transformers'.", "### Tokenizer\n\n\nPhi-3 Mini-128K-Instruct supports a vocabulary size of up to '32064' tokens. The tokenizer files already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.", "### Chat Format\n\n\nGiven the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows.\nYou can provide the prompt as a question with a generic template as follow:\n\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>'. In case of few-shots prompt, the prompt can be formatted as the following:", "### Sample inference code\n\n\nThis code snippets show how to get quickly started with running the model on a GPU:\n\n\n*Some applications/frameworks might not include a BOS token ('~~') at the start of the conversation. Please ensure that it is included since it provides more reliable results.~~*\n\n\nResponsible AI Considerations\n-----------------------------\n\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:\n\n\n* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.\n* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.\n* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.\n* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.\n* Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). 
Important areas for consideration include:\n\n\n* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.\n* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).\n* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.\n* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.\n\n\nTraining\n--------", "### Model\n\n\n* Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. It is best suited for prompts using chat format.\n* Context length: 128K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.", "### Datasets\n\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of\n\n\n1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;\n2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);\n3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.", "### Fine-tuning\n\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.\n\n\nBenchmarks\n----------\n\n\nWe report the results for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.\n\n\nAll the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. 
These numbers might differ from other published numbers due to slightly different choices in the evaluation.\n\n\nAs is now standard, we use few-shot prompts to evaluate the models, at temperature 0.\nThe prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.\nMore specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.\n\n\nThe number of k–shot examples is listed per-benchmark.\n\n\n\nSoftware\n--------\n\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention\n\n\nHardware\n--------\n\n\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:\n\n\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\n\nIf you want to run the model on:\n\n\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\\_pretrained() with attn\\_implementation=\"eager\"\n* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 128K\n\n\nCross Platform Support\n----------------------\n\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-128K-Instruct ONNX model here.\n\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \n\nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\n\nHere are some of the optimized configurations we have added:\n\n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN\n\n\nLicense\n-------\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies." ]
[ "TAGS\n#transformers #safetensors #phi3 #text-generation #nlp #code #phi-3 #function-calling #conversational #custom_code #en #dataset-Trelis/function_calling_v3 #autotrain_compatible #endpoints_compatible #region-us \n", "### Using tokenizer.apply\\_chat\\_template\n\n\nFor an easier application of the prompt, you can set up as follows (note that the conversation below is complete, i.e. you need to remove assistant messages if you want to feed in the conversation to the model):\n\n\nSet up 'messages':\n\n\nwith 'FUNCTION\\_METADATA' as:\n\n\nand then apply the chat template to get a formatted prompt:\n\n\nIf you are using a gated model, you need to first run:", "### Manual Prompt:\n\n\njson\n{\n\"function\": \"get\\_big\\_stocks\",\n\"parameters\": {\n\"number\": 5,\n\"region\": \"World\"\n}\n}\n\n\nDataset\n=======\n\n\nSee Trelis/function\\_calling\\_v3.\n\n\n\n```\nThe original repo card follows below.\n\n```\n\nModel Summary\n-------------\n\n\nThe Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets.\nThis dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties.\nThe model belongs to the Phi-3 family with the Mini version in two variants 4K and 128K which is the context length (in tokens) that it can support.\n\n\nAfter initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures.\nWhen evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.\nResources and Technical Documentation:\n\n\n* Phi-3 Microsoft Blog\n* Phi-3 Technical Report\n* Phi-3 on Azure AI Studio\n* Phi-3 ONNX: 128K\n\n\nIntended Uses\n-------------\n\n\nPrimary use cases\n\n\nThe model is intended for commercial and research use in English. The model provides uses for applications which require:\n\n\n1. Memory/compute constrained environments\n2. Latency bound scenarios\n3. Strong reasoning (especially code, math and logic)\n\n\nOur model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.\n\n\nUse case considerations\n\n\nOur models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.\n\n\nNothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.\n\n\nHow to Use\n----------\n\n\nPhi-3 Mini-128K-Instruct has been integrated in the development version (4.40.0) of 'transformers'. 
Until the official version is released through 'pip', ensure that you are doing one of the following:\n\n\n* When loading the model, ensure that 'trust\\_remote\\_code=True' is passed as an argument of the 'from\\_pretrained()' function.\n* Update your local 'transformers' to the development version: 'pip uninstall -y transformers && pip install git+URL The previous command is an alternative to cloning and installing from the source.\n\n\nThe current 'transformers' version can be verified with: 'pip list | grep transformers'.", "### Tokenizer\n\n\nPhi-3 Mini-128K-Instruct supports a vocabulary size of up to '32064' tokens. The tokenizer files already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.", "### Chat Format\n\n\nGiven the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows.\nYou can provide the prompt as a question with a generic template as follow:\n\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>'. In case of few-shots prompt, the prompt can be formatted as the following:", "### Sample inference code\n\n\nThis code snippets show how to get quickly started with running the model on a GPU:\n\n\n*Some applications/frameworks might not include a BOS token ('~~') at the start of the conversation. Please ensure that it is included since it provides more reliable results.~~*\n\n\nResponsible AI Considerations\n-----------------------------\n\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:\n\n\n* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.\n* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.\n* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.\n* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.\n* Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). 
Important areas for consideration include:\n\n\n* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.\n* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).\n* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.\n* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.\n\n\nTraining\n--------", "### Model\n\n\n* Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. It is best suited for prompts using chat format.\n* Context length: 128K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.", "### Datasets\n\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of\n\n\n1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;\n2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);\n3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.", "### Fine-tuning\n\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.\n\n\nBenchmarks\n----------\n\n\nWe report the results for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.\n\n\nAll the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. 
These numbers might differ from other published numbers due to slightly different choices in the evaluation.\n\n\nAs is now standard, we use few-shot prompts to evaluate the models, at temperature 0.\nThe prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.\nMore specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.\n\n\nThe number of k–shot examples is listed per-benchmark.\n\n\n\nSoftware\n--------\n\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention\n\n\nHardware\n--------\n\n\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:\n\n\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\n\nIf you want to run the model on:\n\n\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\\_pretrained() with attn\\_implementation=\"eager\"\n* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 128K\n\n\nCross Platform Support\n----------------------\n\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-128K-Instruct ONNX model here.\n\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \n\nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\n\nHere are some of the optimized configurations we have added:\n\n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN\n\n\nLicense\n-------\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_4-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.3864 - F1 Score: 0.8470 - Accuracy: 0.847 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5446 | 1.34 | 200 | 0.5061 | 0.7539 | 0.754 | | 0.4745 | 2.68 | 400 | 0.4960 | 0.7550 | 0.755 | | 0.46 | 4.03 | 600 | 0.4894 | 0.7570 | 0.757 | | 0.4463 | 5.37 | 800 | 0.4906 | 0.7680 | 0.768 | | 0.4369 | 6.71 | 1000 | 0.4858 | 0.7594 | 0.76 | | 0.4298 | 8.05 | 1200 | 0.4772 | 0.7669 | 0.767 | | 0.4221 | 9.4 | 1400 | 0.4799 | 0.7670 | 0.767 | | 0.4139 | 10.74 | 1600 | 0.4825 | 0.7699 | 0.77 | | 0.405 | 12.08 | 1800 | 0.4862 | 0.7709 | 0.772 | | 0.402 | 13.42 | 2000 | 0.4680 | 0.7759 | 0.776 | | 0.3922 | 14.77 | 2200 | 0.4783 | 0.7717 | 0.772 | | 0.3907 | 16.11 | 2400 | 0.4696 | 0.7770 | 0.777 | | 0.3787 | 17.45 | 2600 | 0.4953 | 0.7679 | 0.769 | | 0.3742 | 18.79 | 2800 | 0.4803 | 0.7708 | 0.771 | | 0.3672 | 20.13 | 3000 | 0.4769 | 0.7635 | 0.764 | | 0.3638 | 21.48 | 3200 | 0.4701 | 0.7710 | 0.771 | | 0.3609 | 22.82 | 3400 | 0.4716 | 0.7920 | 0.792 | | 0.3513 | 24.16 | 3600 | 0.4730 | 0.7749 | 0.775 | | 0.3451 | 25.5 | 3800 | 0.4650 | 0.7899 | 0.79 | | 0.3382 | 26.85 | 4000 | 0.4742 | 0.7809 | 0.781 | | 0.3404 | 28.19 | 4200 | 0.4721 | 0.7830 | 0.783 | | 0.329 | 29.53 | 4400 | 0.4892 | 0.7890 | 0.789 | | 0.3272 | 30.87 | 4600 | 0.4692 | 0.7859 | 0.786 | | 0.3209 | 32.21 | 4800 | 0.4715 | 0.7860 | 0.786 | | 0.3175 | 33.56 | 5000 | 0.4721 | 0.7850 | 0.785 | | 0.3174 | 34.9 | 5200 | 0.4652 | 0.7950 | 0.795 | | 0.3104 | 36.24 | 5400 | 0.4744 | 0.8030 | 0.803 | | 0.3056 | 37.58 | 5600 | 0.4802 | 0.7930 | 0.793 | | 0.2987 | 38.93 | 5800 | 0.4793 | 0.7970 | 0.797 | | 0.3024 | 40.27 | 6000 | 0.4931 | 0.7970 | 0.797 | | 0.2957 | 41.61 | 6200 | 0.4825 | 0.7930 | 0.793 | | 0.2936 | 42.95 | 6400 | 0.4902 | 0.7988 | 0.799 | | 0.2988 | 44.3 | 6600 | 0.4628 | 0.8110 | 0.811 | | 0.2818 | 45.64 | 6800 | 0.4771 | 0.8099 | 0.81 | | 0.2869 | 46.98 | 7000 | 0.4816 | 0.8080 | 0.808 | | 0.2858 | 48.32 | 7200 | 0.4766 | 0.8080 | 0.808 | | 0.282 | 49.66 | 7400 | 0.4757 | 0.8140 | 0.814 | | 0.2777 | 51.01 | 7600 | 0.4768 | 0.8140 | 0.814 | | 0.2759 | 52.35 | 7800 | 0.4744 | 0.8120 | 0.812 | | 0.2717 | 53.69 | 8000 | 0.4882 | 0.8139 | 0.814 | | 0.2754 | 55.03 | 8200 | 0.4802 | 0.8100 | 0.81 | | 0.2708 | 56.38 | 8400 | 0.4811 | 0.8090 | 0.809 | | 0.2688 | 57.72 | 8600 | 0.4794 | 0.8120 | 0.812 | | 0.2689 | 59.06 | 8800 | 0.4737 | 0.8100 | 0.81 | | 0.2691 | 60.4 | 9000 | 0.4791 | 0.8100 | 0.81 | | 0.2624 | 61.74 | 9200 | 
0.4763 | 0.8100 | 0.81 | | 0.2646 | 63.09 | 9400 | 0.4772 | 0.8090 | 0.809 | | 0.2655 | 64.43 | 9600 | 0.4742 | 0.8090 | 0.809 | | 0.2621 | 65.77 | 9800 | 0.4772 | 0.8100 | 0.81 | | 0.2644 | 67.11 | 10000 | 0.4775 | 0.8100 | 0.81 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
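As a usage sketch for the card above (not part of the original card): the adapter can be loaded on top of the base checkpoint with PEFT. The adapter repo id and the 2-label head are assumptions inferred from the record's metadata and the binary F1/accuracy reported.

```python
# Hedged sketch: the adapter repo id and num_labels=2 are assumptions,
# not stated explicitly in the card.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_22M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_4-seqsight_16384_512_22M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter
model.eval()

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # toy DNA-like input
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))  # class probabilities
```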
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_4-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_4-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:14:06+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_4-seqsight\_16384\_512\_22M-L32\_f =========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset. It achieves the following results on the evaluation set: * Loss: 0.3864 * F1 Score: 0.8470 * Accuracy: 0.847 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_3-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5776 - F1 Score: 0.6912 - Accuracy: 0.693 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6531 | 0.93 | 200 | 0.5997 | 0.6821 | 0.682 | | 0.6217 | 1.87 | 400 | 0.5819 | 0.7028 | 0.704 | | 0.6118 | 2.8 | 600 | 0.5774 | 0.7052 | 0.712 | | 0.6033 | 3.74 | 800 | 0.5708 | 0.7083 | 0.712 | | 0.6024 | 4.67 | 1000 | 0.5686 | 0.7052 | 0.71 | | 0.5982 | 5.61 | 1200 | 0.5673 | 0.7096 | 0.711 | | 0.5949 | 6.54 | 1400 | 0.5660 | 0.7076 | 0.713 | | 0.5958 | 7.48 | 1600 | 0.5637 | 0.7140 | 0.715 | | 0.5906 | 8.41 | 1800 | 0.5617 | 0.7106 | 0.713 | | 0.5942 | 9.35 | 2000 | 0.5620 | 0.7079 | 0.711 | | 0.5893 | 10.28 | 2200 | 0.5633 | 0.7069 | 0.707 | | 0.5895 | 11.21 | 2400 | 0.5607 | 0.7079 | 0.709 | | 0.5879 | 12.15 | 2600 | 0.5598 | 0.6995 | 0.704 | | 0.583 | 13.08 | 2800 | 0.5589 | 0.6958 | 0.698 | | 0.585 | 14.02 | 3000 | 0.5598 | 0.6969 | 0.698 | | 0.5828 | 14.95 | 3200 | 0.5587 | 0.6971 | 0.698 | | 0.5819 | 15.89 | 3400 | 0.5577 | 0.6997 | 0.702 | | 0.5805 | 16.82 | 3600 | 0.5583 | 0.6985 | 0.699 | | 0.5793 | 17.76 | 3800 | 0.5572 | 0.7018 | 0.703 | | 0.5823 | 18.69 | 4000 | 0.5592 | 0.7010 | 0.702 | | 0.5784 | 19.63 | 4200 | 0.5560 | 0.7010 | 0.704 | | 0.5795 | 20.56 | 4400 | 0.5555 | 0.7061 | 0.707 | | 0.5767 | 21.5 | 4600 | 0.5589 | 0.7071 | 0.707 | | 0.5766 | 22.43 | 4800 | 0.5561 | 0.7007 | 0.702 | | 0.576 | 23.36 | 5000 | 0.5554 | 0.6993 | 0.7 | | 0.578 | 24.3 | 5200 | 0.5559 | 0.7010 | 0.701 | | 0.5709 | 25.23 | 5400 | 0.5539 | 0.7016 | 0.704 | | 0.5789 | 26.17 | 5600 | 0.5548 | 0.6954 | 0.696 | | 0.5741 | 27.1 | 5800 | 0.5530 | 0.7041 | 0.706 | | 0.5717 | 28.04 | 6000 | 0.5527 | 0.6980 | 0.699 | | 0.5728 | 28.97 | 6200 | 0.5558 | 0.6991 | 0.699 | | 0.5722 | 29.91 | 6400 | 0.5534 | 0.6896 | 0.69 | | 0.5715 | 30.84 | 6600 | 0.5523 | 0.6996 | 0.701 | | 0.5727 | 31.78 | 6800 | 0.5546 | 0.7020 | 0.702 | | 0.569 | 32.71 | 7000 | 0.5517 | 0.6977 | 0.699 | | 0.575 | 33.64 | 7200 | 0.5521 | 0.7022 | 0.703 | | 0.5654 | 34.58 | 7400 | 0.5549 | 0.7031 | 0.703 | | 0.5721 | 35.51 | 7600 | 0.5528 | 0.6968 | 0.697 | | 0.5697 | 36.45 | 7800 | 0.5530 | 0.6970 | 0.697 | | 0.5706 | 37.38 | 8000 | 0.5516 | 0.7014 | 0.702 | | 0.5661 | 38.32 | 8200 | 0.5512 | 0.7051 | 0.706 | | 0.5694 | 39.25 | 8400 | 0.5517 | 0.7006 | 0.701 | | 0.5693 | 40.19 | 8600 | 0.5522 | 0.7018 | 0.702 | | 0.5673 | 41.12 | 8800 | 0.5519 | 0.7047 | 0.705 | | 0.5704 | 42.06 | 9000 | 0.5516 | 0.7037 | 0.704 | | 0.5677 | 42.99 | 9200 | 
0.5529 | 0.7010 | 0.701 | | 0.5659 | 43.93 | 9400 | 0.5528 | 0.7010 | 0.701 | | 0.5667 | 44.86 | 9600 | 0.5515 | 0.7047 | 0.705 | | 0.5699 | 45.79 | 9800 | 0.5517 | 0.7028 | 0.703 | | 0.566 | 46.73 | 10000 | 0.5516 | 0.7037 | 0.704 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
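The hyperparameter list above maps directly onto `transformers.TrainingArguments`; a minimal sketch mirroring it is below. Note the Adam betas=(0.9,0.999) and epsilon=1e-08 shown in the card are the library defaults, so they need no explicit flags; `output_dir` and the 200-step eval cadence are assumptions (the latter matching the validation table).

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gue_tf_3_l1",        # placeholder, not from the card
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",     # assumption: evaluate every 200 steps,
    eval_steps=200,                  # matching the validation table above
    logging_steps=200,
)
```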
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_3-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_3-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:14:17+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_3-seqsight\_16384\_512\_22M-L1\_f ========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset. It achieves the following results on the evaluation set: * Loss: 0.5776 * F1 Score: 0.6912 * Accuracy: 0.693 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_3-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5640 - F1 Score: 0.7002 - Accuracy: 0.702 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6405 | 0.93 | 200 | 0.5815 | 0.7028 | 0.703 | | 0.6073 | 1.87 | 400 | 0.5741 | 0.6990 | 0.699 | | 0.5987 | 2.8 | 600 | 0.5656 | 0.6945 | 0.699 | | 0.5918 | 3.74 | 800 | 0.5675 | 0.6931 | 0.693 | | 0.5901 | 4.67 | 1000 | 0.5640 | 0.6985 | 0.699 | | 0.5856 | 5.61 | 1200 | 0.5597 | 0.6996 | 0.701 | | 0.583 | 6.54 | 1400 | 0.5606 | 0.6974 | 0.698 | | 0.5809 | 7.48 | 1600 | 0.5585 | 0.6988 | 0.699 | | 0.5751 | 8.41 | 1800 | 0.5614 | 0.6960 | 0.696 | | 0.5795 | 9.35 | 2000 | 0.5564 | 0.6888 | 0.69 | | 0.5732 | 10.28 | 2200 | 0.5624 | 0.6919 | 0.692 | | 0.572 | 11.21 | 2400 | 0.5557 | 0.6976 | 0.698 | | 0.5702 | 12.15 | 2600 | 0.5540 | 0.7051 | 0.708 | | 0.5654 | 13.08 | 2800 | 0.5560 | 0.7032 | 0.706 | | 0.5664 | 14.02 | 3000 | 0.5555 | 0.6941 | 0.694 | | 0.5641 | 14.95 | 3200 | 0.5522 | 0.7020 | 0.704 | | 0.5617 | 15.89 | 3400 | 0.5516 | 0.6965 | 0.698 | | 0.5634 | 16.82 | 3600 | 0.5520 | 0.6992 | 0.7 | | 0.5594 | 17.76 | 3800 | 0.5521 | 0.7050 | 0.706 | | 0.5629 | 18.69 | 4000 | 0.5572 | 0.6941 | 0.694 | | 0.559 | 19.63 | 4200 | 0.5512 | 0.7055 | 0.707 | | 0.5572 | 20.56 | 4400 | 0.5529 | 0.7066 | 0.707 | | 0.5557 | 21.5 | 4600 | 0.5617 | 0.6886 | 0.689 | | 0.5534 | 22.43 | 4800 | 0.5520 | 0.7043 | 0.705 | | 0.5543 | 23.36 | 5000 | 0.5559 | 0.6931 | 0.693 | | 0.5556 | 24.3 | 5200 | 0.5616 | 0.6881 | 0.689 | | 0.5479 | 25.23 | 5400 | 0.5555 | 0.7116 | 0.713 | | 0.5562 | 26.17 | 5600 | 0.5532 | 0.6989 | 0.699 | | 0.5504 | 27.1 | 5800 | 0.5518 | 0.7026 | 0.703 | | 0.5485 | 28.04 | 6000 | 0.5508 | 0.7076 | 0.708 | | 0.549 | 28.97 | 6200 | 0.5579 | 0.6950 | 0.695 | | 0.5492 | 29.91 | 6400 | 0.5541 | 0.6881 | 0.688 | | 0.5486 | 30.84 | 6600 | 0.5512 | 0.7086 | 0.709 | | 0.5459 | 31.78 | 6800 | 0.5546 | 0.7010 | 0.701 | | 0.5452 | 32.71 | 7000 | 0.5507 | 0.7026 | 0.703 | | 0.5513 | 33.64 | 7200 | 0.5512 | 0.7019 | 0.702 | | 0.5405 | 34.58 | 7400 | 0.5569 | 0.7007 | 0.701 | | 0.5481 | 35.51 | 7600 | 0.5539 | 0.6960 | 0.696 | | 0.5438 | 36.45 | 7800 | 0.5553 | 0.7010 | 0.701 | | 0.5462 | 37.38 | 8000 | 0.5521 | 0.7021 | 0.702 | | 0.5381 | 38.32 | 8200 | 0.5530 | 0.7027 | 0.703 | | 0.5452 | 39.25 | 8400 | 0.5521 | 0.7049 | 0.705 | | 0.5435 | 40.19 | 8600 | 0.5530 | 0.7061 | 0.706 | | 0.5421 | 41.12 | 8800 | 0.5523 | 0.7081 | 0.708 | | 0.5444 | 42.06 | 9000 | 0.5523 | 0.7061 | 0.706 | | 0.5412 | 42.99 | 9200 | 
0.5534 | 0.7040 | 0.704 | | 0.5397 | 43.93 | 9400 | 0.5545 | 0.7020 | 0.702 | | 0.54 | 44.86 | 9600 | 0.5522 | 0.7081 | 0.708 | | 0.5435 | 45.79 | 9800 | 0.5522 | 0.7051 | 0.705 | | 0.5374 | 46.73 | 10000 | 0.5527 | 0.7051 | 0.705 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
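The F1/accuracy columns above can be reproduced with a standard metrics callback; the card does not state the F1 averaging mode, so `macro` in the sketch below is an assumption.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes in.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),  # averaging mode assumed
        "accuracy": accuracy_score(labels, preds),
    }

# Toy check with two dummy examples:
print(compute_metrics((np.array([[0.2, 0.8], [0.9, 0.1]]), np.array([1, 0]))))
# -> {'f1': 1.0, 'accuracy': 1.0}
```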
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_3-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_3-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:14:47+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_3-seqsight\_16384\_512\_22M-L8\_f ========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset. It achieves the following results on the evaluation set: * Loss: 0.5640 * F1 Score: 0.7002 * Accuracy: 0.702 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_3-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5519 - F1 Score: 0.7047 - Accuracy: 0.707 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.633 | 0.93 | 200 | 0.5725 | 0.6982 | 0.7 | | 0.6014 | 1.87 | 400 | 0.5741 | 0.6990 | 0.699 | | 0.5924 | 2.8 | 600 | 0.5617 | 0.6975 | 0.7 | | 0.5827 | 3.74 | 800 | 0.5667 | 0.6898 | 0.69 | | 0.5795 | 4.67 | 1000 | 0.5591 | 0.7027 | 0.703 | | 0.5729 | 5.61 | 1200 | 0.5554 | 0.6992 | 0.701 | | 0.5701 | 6.54 | 1400 | 0.5541 | 0.6910 | 0.691 | | 0.5662 | 7.48 | 1600 | 0.5472 | 0.7045 | 0.705 | | 0.5594 | 8.41 | 1800 | 0.5548 | 0.6988 | 0.699 | | 0.5627 | 9.35 | 2000 | 0.5485 | 0.7087 | 0.71 | | 0.5563 | 10.28 | 2200 | 0.5543 | 0.6946 | 0.695 | | 0.5531 | 11.21 | 2400 | 0.5550 | 0.6999 | 0.7 | | 0.5509 | 12.15 | 2600 | 0.5459 | 0.7176 | 0.72 | | 0.5445 | 13.08 | 2800 | 0.5521 | 0.7114 | 0.715 | | 0.5438 | 14.02 | 3000 | 0.5548 | 0.6971 | 0.697 | | 0.5398 | 14.95 | 3200 | 0.5420 | 0.7167 | 0.719 | | 0.5368 | 15.89 | 3400 | 0.5412 | 0.7232 | 0.724 | | 0.5377 | 16.82 | 3600 | 0.5468 | 0.7190 | 0.72 | | 0.5319 | 17.76 | 3800 | 0.5422 | 0.7119 | 0.712 | | 0.5338 | 18.69 | 4000 | 0.5490 | 0.7121 | 0.712 | | 0.5271 | 19.63 | 4200 | 0.5370 | 0.7279 | 0.729 | | 0.5251 | 20.56 | 4400 | 0.5496 | 0.7140 | 0.714 | | 0.5222 | 21.5 | 4600 | 0.5531 | 0.7012 | 0.702 | | 0.519 | 22.43 | 4800 | 0.5439 | 0.7123 | 0.713 | | 0.5176 | 23.36 | 5000 | 0.5595 | 0.7037 | 0.704 | | 0.5182 | 24.3 | 5200 | 0.5581 | 0.7063 | 0.707 | | 0.509 | 25.23 | 5400 | 0.5625 | 0.7206 | 0.721 | | 0.5159 | 26.17 | 5600 | 0.5478 | 0.7210 | 0.721 | | 0.5073 | 27.1 | 5800 | 0.5500 | 0.7169 | 0.717 | | 0.5057 | 28.04 | 6000 | 0.5538 | 0.7231 | 0.723 | | 0.5062 | 28.97 | 6200 | 0.5637 | 0.7089 | 0.709 | | 0.5031 | 29.91 | 6400 | 0.5569 | 0.7060 | 0.706 | | 0.5021 | 30.84 | 6600 | 0.5489 | 0.7230 | 0.723 | | 0.4994 | 31.78 | 6800 | 0.5557 | 0.7075 | 0.708 | | 0.4997 | 32.71 | 7000 | 0.5454 | 0.7251 | 0.725 | | 0.5018 | 33.64 | 7200 | 0.5508 | 0.7090 | 0.709 | | 0.4901 | 34.58 | 7400 | 0.5590 | 0.7042 | 0.705 | | 0.4969 | 35.51 | 7600 | 0.5553 | 0.6997 | 0.7 | | 0.4927 | 36.45 | 7800 | 0.5554 | 0.7110 | 0.711 | | 0.4946 | 37.38 | 8000 | 0.5510 | 0.7250 | 0.725 | | 0.4847 | 38.32 | 8200 | 0.5567 | 0.7220 | 0.722 | | 0.4907 | 39.25 | 8400 | 0.5609 | 0.7181 | 0.718 | | 0.4907 | 40.19 | 8600 | 0.5583 | 0.7090 | 0.709 | | 0.4889 | 41.12 | 8800 | 0.5537 | 0.7151 | 0.715 | | 0.4887 | 42.06 | 9000 | 0.5550 | 0.7161 | 0.716 | | 0.4849 | 42.99 | 9200 | 0.5617 | 
0.7045 | 0.705 | | 0.4858 | 43.93 | 9400 | 0.5630 | 0.7075 | 0.708 | | 0.4832 | 44.86 | 9600 | 0.5610 | 0.7089 | 0.709 | | 0.4855 | 45.79 | 9800 | 0.5608 | 0.7130 | 0.713 | | 0.4797 | 46.73 | 10000 | 0.5614 | 0.7130 | 0.713 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
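Worth noting from the table above: validation loss bottoms out mid-run (≈0.537 around step 4200) and drifts upward afterwards, so keeping the best checkpoint rather than the last one is worthwhile. A hedged sketch with transformers' built-in early stopping; all flag values are assumptions beyond the 200-step eval cadence visible in the table.

```python
from transformers import EarlyStoppingCallback, TrainingArguments

args = TrainingArguments(
    output_dir="gue_tf_3_l32",       # placeholder
    evaluation_strategy="steps",
    eval_steps=200,
    save_strategy="steps",
    save_steps=200,
    load_best_model_at_end=True,     # restore the lowest-loss checkpoint at the end
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    max_steps=10_000,
)
stopper = EarlyStoppingCallback(early_stopping_patience=10)
# Pass `args` and `callbacks=[stopper]` to transformers.Trainer alongside the
# model and datasets (omitted here) to halt once eval loss stops improving.
```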
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_3-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_3-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:15:27+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_3-seqsight\_16384\_512\_22M-L32\_f =========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset. It achieves the following results on the evaluation set: * Loss: 0.5519 * F1 Score: 0.7047 * Accuracy: 0.707 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
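Since the card's "How to Get Started" section is empty, here is a minimal, hedged sketch: the repo id is taken from this record's metadata, and the chat template is assumed to be supplied by the tokenizer (the record is tagged `conversational`).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shallow6414/9lg2om0"  # repo id from this record; card gives no details
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```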
{"library_name": "transformers", "tags": []}
shallow6414/9lg2om0
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T09:15:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_2-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.4637 - F1 Score: 0.7756 - Accuracy: 0.776 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6103 | 1.34 | 200 | 0.5705 | 0.7010 | 0.706 | | 0.5646 | 2.68 | 400 | 0.5452 | 0.7186 | 0.72 | | 0.5482 | 4.03 | 600 | 0.5374 | 0.7304 | 0.731 | | 0.5393 | 5.37 | 800 | 0.5322 | 0.7329 | 0.733 | | 0.5362 | 6.71 | 1000 | 0.5356 | 0.7327 | 0.734 | | 0.531 | 8.05 | 1200 | 0.5296 | 0.7240 | 0.724 | | 0.5279 | 9.4 | 1400 | 0.5301 | 0.7346 | 0.736 | | 0.5262 | 10.74 | 1600 | 0.5260 | 0.7382 | 0.739 | | 0.5228 | 12.08 | 1800 | 0.5310 | 0.7290 | 0.729 | | 0.5222 | 13.42 | 2000 | 0.5255 | 0.7429 | 0.743 | | 0.5218 | 14.77 | 2200 | 0.5217 | 0.7375 | 0.738 | | 0.5166 | 16.11 | 2400 | 0.5247 | 0.7310 | 0.731 | | 0.5191 | 17.45 | 2600 | 0.5207 | 0.7416 | 0.742 | | 0.5124 | 18.79 | 2800 | 0.5205 | 0.7358 | 0.736 | | 0.513 | 20.13 | 3000 | 0.5195 | 0.7387 | 0.739 | | 0.5144 | 21.48 | 3200 | 0.5215 | 0.7350 | 0.735 | | 0.5133 | 22.82 | 3400 | 0.5188 | 0.7386 | 0.739 | | 0.5098 | 24.16 | 3600 | 0.5224 | 0.7216 | 0.724 | | 0.5089 | 25.5 | 3800 | 0.5217 | 0.7334 | 0.735 | | 0.5116 | 26.85 | 4000 | 0.5212 | 0.7337 | 0.735 | | 0.5063 | 28.19 | 4200 | 0.5193 | 0.7402 | 0.741 | | 0.5061 | 29.53 | 4400 | 0.5192 | 0.7369 | 0.737 | | 0.5035 | 30.87 | 4600 | 0.5179 | 0.7376 | 0.738 | | 0.5063 | 32.21 | 4800 | 0.5171 | 0.7399 | 0.74 | | 0.5034 | 33.56 | 5000 | 0.5159 | 0.7390 | 0.739 | | 0.5041 | 34.9 | 5200 | 0.5169 | 0.7367 | 0.737 | | 0.4995 | 36.24 | 5400 | 0.5166 | 0.7373 | 0.738 | | 0.5057 | 37.58 | 5600 | 0.5148 | 0.7368 | 0.737 | | 0.5023 | 38.93 | 5800 | 0.5132 | 0.7328 | 0.733 | | 0.4981 | 40.27 | 6000 | 0.5151 | 0.7347 | 0.735 | | 0.4999 | 41.61 | 6200 | 0.5156 | 0.7318 | 0.732 | | 0.5008 | 42.95 | 6400 | 0.5155 | 0.7359 | 0.736 | | 0.4969 | 44.3 | 6600 | 0.5140 | 0.7379 | 0.738 | | 0.5002 | 45.64 | 6800 | 0.5142 | 0.7347 | 0.735 | | 0.4971 | 46.98 | 7000 | 0.5144 | 0.7344 | 0.735 | | 0.4969 | 48.32 | 7200 | 0.5154 | 0.7390 | 0.739 | | 0.4972 | 49.66 | 7400 | 0.5130 | 0.7326 | 0.733 | | 0.4942 | 51.01 | 7600 | 0.5142 | 0.7336 | 0.734 | | 0.4953 | 52.35 | 7800 | 0.5142 | 0.7308 | 0.731 | | 0.496 | 53.69 | 8000 | 0.5134 | 0.7306 | 0.731 | | 0.4917 | 55.03 | 8200 | 0.5138 | 0.7349 | 0.735 | | 0.4971 | 56.38 | 8400 | 0.5137 | 0.7360 | 0.736 | | 0.4939 | 57.72 | 8600 | 0.5136 | 0.7319 | 0.732 | | 0.4927 | 59.06 | 8800 | 0.5131 | 0.7347 | 0.735 | | 0.4911 | 60.4 | 9000 | 0.5138 | 0.7349 | 0.735 | | 0.4974 | 61.74 | 9200 
| 0.5128 | 0.7386 | 0.739 | | 0.4955 | 63.09 | 9400 | 0.5128 | 0.7348 | 0.735 | | 0.4937 | 64.43 | 9600 | 0.5128 | 0.7358 | 0.736 | | 0.4942 | 65.77 | 9800 | 0.5129 | 0.7319 | 0.732 | | 0.4914 | 67.11 | 10000 | 0.5129 | 0.7367 | 0.737 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
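A hedged loading sketch via `PeftConfig`, which resolves the base checkpoint recorded inside the adapter's own config rather than hard-coding it; `num_labels=2` is again an assumption.

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

adapter_id = "mahdibaghbanzadeh/GUE_tf_2-seqsight_16384_512_22M-L1_f"
config = PeftConfig.from_pretrained(adapter_id)   # reads the stored base-model path

base = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path, num_labels=2  # label count assumed
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```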
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_2-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_2-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:15:38+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_2-seqsight\_16384\_512\_22M-L1\_f ========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset. It achieves the following results on the evaluation set: * Loss: 0.4637 * F1 Score: 0.7756 * Accuracy: 0.776 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold4 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window16-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window16-256) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0255 - Accuracy: 0.6605 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.5422 | 1.0 | 924 | 1.4764 | 0.5075 | | 1.2973 | 2.0 | 1848 | 1.2000 | 0.5841 | | 1.161 | 3.0 | 2772 | 1.1897 | 0.5857 | | 1.1967 | 4.0 | 3696 | 1.1001 | 0.6172 | | 1.1015 | 5.0 | 4620 | 1.0649 | 0.6299 | | 0.8173 | 6.0 | 5544 | 1.0497 | 0.6407 | | 0.8829 | 7.0 | 6468 | 1.0344 | 0.6491 | | 1.0073 | 8.0 | 7392 | 1.0447 | 0.6513 | | 0.9166 | 9.0 | 8316 | 1.0398 | 0.6513 | | 0.9029 | 10.0 | 9240 | 1.0255 | 0.6605 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0 - Datasets 2.14.6 - Tokenizers 0.14.1
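For inference with this classifier, the standard image-classification pipeline applies; the repo id below is taken from this record, and the label set depends on the (unnamed) `imagefolder` classes, so it is unknown here.

```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="onizukal/Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold4",
)
preds = clf("example.jpg")  # any local image path or URL
for p in preds:
    print(f"{p['label']}: {p['score']:.3f}")  # top labels with scores
```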
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swinv2-tiny-patch4-window16-256", "model-index": [{"name": "Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold4", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6605256028176646, "name": "Accuracy"}]}]}]}
onizukal/Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold4
null
[ "transformers", "safetensors", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swinv2-tiny-patch4-window16-256", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T09:16:01+00:00
[]
[]
TAGS #transformers #safetensors #swinv2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swinv2-tiny-patch4-window16-256 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
Boya1\_RMSProp\_1-e5\_10Epoch\_swinv2-tiny-patch4-window16-256\_fold4 ===================================================================== This model is a fine-tuned version of microsoft/swinv2-tiny-patch4-window16-256 on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 1.0255 * Accuracy: 0.6605 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.35.0 * Pytorch 2.1.0 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#transformers #safetensors #swinv2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swinv2-tiny-patch4-window16-256 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_2-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.4840 - F1 Score: 0.7789 - Accuracy: 0.779 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5782 | 1.34 | 200 | 0.5356 | 0.7289 | 0.73 | | 0.5351 | 2.68 | 400 | 0.5267 | 0.7290 | 0.729 | | 0.5225 | 4.03 | 600 | 0.5212 | 0.7367 | 0.737 | | 0.5121 | 5.37 | 800 | 0.5203 | 0.7398 | 0.74 | | 0.5081 | 6.71 | 1000 | 0.5223 | 0.7339 | 0.734 | | 0.5003 | 8.05 | 1200 | 0.5231 | 0.7223 | 0.723 | | 0.4927 | 9.4 | 1400 | 0.5197 | 0.7304 | 0.731 | | 0.4902 | 10.74 | 1600 | 0.5129 | 0.7330 | 0.733 | | 0.4847 | 12.08 | 1800 | 0.5212 | 0.7369 | 0.737 | | 0.4829 | 13.42 | 2000 | 0.5225 | 0.7319 | 0.732 | | 0.4787 | 14.77 | 2200 | 0.5221 | 0.7252 | 0.727 | | 0.4707 | 16.11 | 2400 | 0.5179 | 0.7360 | 0.736 | | 0.4727 | 17.45 | 2600 | 0.5146 | 0.7391 | 0.74 | | 0.4656 | 18.79 | 2800 | 0.5099 | 0.7438 | 0.744 | | 0.4607 | 20.13 | 3000 | 0.5138 | 0.7420 | 0.742 | | 0.4593 | 21.48 | 3200 | 0.5310 | 0.7332 | 0.734 | | 0.4585 | 22.82 | 3400 | 0.5255 | 0.7360 | 0.736 | | 0.4515 | 24.16 | 3600 | 0.5233 | 0.7305 | 0.731 | | 0.4471 | 25.5 | 3800 | 0.5207 | 0.7337 | 0.734 | | 0.4437 | 26.85 | 4000 | 0.5266 | 0.7284 | 0.729 | | 0.4406 | 28.19 | 4200 | 0.5231 | 0.7439 | 0.744 | | 0.4365 | 29.53 | 4400 | 0.5399 | 0.7289 | 0.729 | | 0.4307 | 30.87 | 4600 | 0.5266 | 0.7349 | 0.735 | | 0.4318 | 32.21 | 4800 | 0.5375 | 0.7370 | 0.737 | | 0.4284 | 33.56 | 5000 | 0.5447 | 0.7284 | 0.729 | | 0.4262 | 34.9 | 5200 | 0.5368 | 0.7268 | 0.727 | | 0.4179 | 36.24 | 5400 | 0.5416 | 0.7370 | 0.737 | | 0.4219 | 37.58 | 5600 | 0.5390 | 0.7280 | 0.728 | | 0.4163 | 38.93 | 5800 | 0.5324 | 0.7250 | 0.725 | | 0.4091 | 40.27 | 6000 | 0.5540 | 0.7244 | 0.725 | | 0.4094 | 41.61 | 6200 | 0.5440 | 0.7210 | 0.721 | | 0.4092 | 42.95 | 6400 | 0.5670 | 0.7265 | 0.727 | | 0.4018 | 44.3 | 6600 | 0.5576 | 0.7259 | 0.726 | | 0.4016 | 45.64 | 6800 | 0.5645 | 0.7208 | 0.721 | | 0.3998 | 46.98 | 7000 | 0.5529 | 0.7220 | 0.722 | | 0.3936 | 48.32 | 7200 | 0.5677 | 0.7248 | 0.725 | | 0.3943 | 49.66 | 7400 | 0.5703 | 0.7210 | 0.721 | | 0.3911 | 51.01 | 7600 | 0.5648 | 0.7268 | 0.727 | | 0.3914 | 52.35 | 7800 | 0.5692 | 0.7240 | 0.724 | | 0.3902 | 53.69 | 8000 | 0.5635 | 0.7260 | 0.726 | | 0.386 | 55.03 | 8200 | 0.5661 | 0.7210 | 0.721 | | 0.3887 | 56.38 | 8400 | 0.5718 | 0.7197 | 0.72 | | 0.386 | 57.72 | 8600 | 0.5625 | 0.728 | 0.728 | | 0.3826 | 59.06 | 8800 | 0.5694 | 0.7300 | 0.73 | | 0.382 | 60.4 | 9000 | 0.5785 | 0.7249 | 0.725 | | 0.3845 | 61.74 | 9200 | 
0.5721 | 0.7300 | 0.73 | | 0.3811 | 63.09 | 9400 | 0.5740 | 0.7270 | 0.727 | | 0.3789 | 64.43 | 9600 | 0.5734 | 0.7270 | 0.727 | | 0.3826 | 65.77 | 9800 | 0.5740 | 0.7299 | 0.73 | | 0.3779 | 67.11 | 10000 | 0.5736 | 0.7270 | 0.727 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
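The L1/L8/L32 suffixes across these runs plausibly index adapter capacity (e.g., a LoRA rank), though none of the cards say so; independent of that guess, PEFT can report how small the trainable footprint actually is.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_16384_512_22M", num_labels=2  # label count assumed
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_tf_2-seqsight_16384_512_22M-L32_f"
)
# Prints trainable vs. total parameters, e.g.
# "trainable params: ... || all params: ... || trainable%: ..."
model.print_trainable_parameters()
```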
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_2-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_2-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:16:17+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_2-seqsight\_16384\_512\_22M-L32\_f =========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset. It achieves the following results on the evaluation set: * Loss: 0.4840 * F1 Score: 0.7789 * Accuracy: 0.779 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_2-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.4616 - F1 Score: 0.7690 - Accuracy: 0.769 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5929 | 1.34 | 200 | 0.5474 | 0.7144 | 0.716 | | 0.5432 | 2.68 | 400 | 0.5322 | 0.7269 | 0.727 | | 0.5315 | 4.03 | 600 | 0.5275 | 0.7375 | 0.738 | | 0.523 | 5.37 | 800 | 0.5251 | 0.7375 | 0.738 | | 0.5217 | 6.71 | 1000 | 0.5242 | 0.7355 | 0.736 | | 0.5142 | 8.05 | 1200 | 0.5255 | 0.7330 | 0.733 | | 0.5094 | 9.4 | 1400 | 0.5210 | 0.7315 | 0.733 | | 0.5081 | 10.74 | 1600 | 0.5183 | 0.7321 | 0.733 | | 0.5047 | 12.08 | 1800 | 0.5229 | 0.7330 | 0.733 | | 0.5024 | 13.42 | 2000 | 0.5240 | 0.7420 | 0.742 | | 0.5022 | 14.77 | 2200 | 0.5214 | 0.7279 | 0.73 | | 0.4951 | 16.11 | 2400 | 0.5208 | 0.7379 | 0.738 | | 0.498 | 17.45 | 2600 | 0.5174 | 0.7313 | 0.732 | | 0.492 | 18.79 | 2800 | 0.5164 | 0.7292 | 0.73 | | 0.4895 | 20.13 | 3000 | 0.5178 | 0.7295 | 0.73 | | 0.4909 | 21.48 | 3200 | 0.5239 | 0.7277 | 0.728 | | 0.4924 | 22.82 | 3400 | 0.5168 | 0.7370 | 0.737 | | 0.4859 | 24.16 | 3600 | 0.5166 | 0.7272 | 0.728 | | 0.4853 | 25.5 | 3800 | 0.5152 | 0.7418 | 0.742 | | 0.4848 | 26.85 | 4000 | 0.5201 | 0.7312 | 0.732 | | 0.483 | 28.19 | 4200 | 0.5161 | 0.7389 | 0.739 | | 0.4807 | 29.53 | 4400 | 0.5236 | 0.7410 | 0.741 | | 0.478 | 30.87 | 4600 | 0.5208 | 0.7370 | 0.737 | | 0.4806 | 32.21 | 4800 | 0.5207 | 0.7390 | 0.739 | | 0.4756 | 33.56 | 5000 | 0.5268 | 0.7375 | 0.738 | | 0.4764 | 34.9 | 5200 | 0.5229 | 0.7360 | 0.736 | | 0.4715 | 36.24 | 5400 | 0.5242 | 0.7289 | 0.729 | | 0.4771 | 37.58 | 5600 | 0.5217 | 0.7350 | 0.735 | | 0.4734 | 38.93 | 5800 | 0.5210 | 0.7310 | 0.731 | | 0.4694 | 40.27 | 6000 | 0.5226 | 0.7296 | 0.73 | | 0.4719 | 41.61 | 6200 | 0.5233 | 0.7300 | 0.73 | | 0.4713 | 42.95 | 6400 | 0.5263 | 0.7389 | 0.739 | | 0.4654 | 44.3 | 6600 | 0.5233 | 0.7360 | 0.736 | | 0.4686 | 45.64 | 6800 | 0.5257 | 0.7319 | 0.732 | | 0.4656 | 46.98 | 7000 | 0.5219 | 0.7330 | 0.733 | | 0.4644 | 48.32 | 7200 | 0.5268 | 0.7320 | 0.732 | | 0.4656 | 49.66 | 7400 | 0.5214 | 0.7330 | 0.733 | | 0.4624 | 51.01 | 7600 | 0.5266 | 0.732 | 0.732 | | 0.4622 | 52.35 | 7800 | 0.5281 | 0.732 | 0.732 | | 0.4641 | 53.69 | 8000 | 0.5234 | 0.7300 | 0.73 | | 0.4595 | 55.03 | 8200 | 0.5247 | 0.7290 | 0.729 | | 0.4636 | 56.38 | 8400 | 0.5268 | 0.7369 | 0.737 | | 0.4602 | 57.72 | 8600 | 0.5228 | 0.7300 | 0.73 | | 0.4601 | 59.06 | 8800 | 0.5238 | 0.7320 | 0.732 | | 0.4571 | 60.4 | 9000 | 0.5287 | 0.7319 | 0.732 | | 0.463 | 61.74 | 9200 | 0.5243 | 
0.7270 | 0.727 | | 0.4595 | 63.09 | 9400 | 0.5258 | 0.7280 | 0.728 | | 0.4601 | 64.43 | 9600 | 0.5250 | 0.7310 | 0.731 | | 0.4589 | 65.77 | 9800 | 0.5258 | 0.7290 | 0.729 | | 0.4566 | 67.11 | 10000 | 0.5255 | 0.7280 | 0.728 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
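For deployment, the adapter can be folded back into the base weights so inference needs no PEFT wrapper. This sketch assumes the adapter is LoRA-style (`merge_and_unload` applies to such adapters) and that the head is binary; neither is stated in the card.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base_id = "mahdibaghbanzadeh/seqsight_16384_512_22M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_2-seqsight_16384_512_22M-L8_f"

base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)

# Folds the adapter weights into the base model (LoRA-style adapter assumed).
merged = model.merge_and_unload()
merged.save_pretrained("gue_tf_2_l8_merged")  # plain transformers checkpoint
```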
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_2-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_2-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:16:17+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_2-seqsight\_16384\_512\_22M-L8\_f ========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset. It achieves the following results on the evaluation set: * Loss: 0.4616 * F1 Score: 0.7690 * Accuracy: 0.769 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# stablelm-2-zephyr-1.6b-slerpx10 stablelm-2-zephyr-1.6b-slerpx10 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [aipib/stablelm-2-zephyr-1.6b-slerpx9](https://huggingface.co/aipib/stablelm-2-zephyr-1.6b-slerpx9) * [stabilityai/stablelm-2-zephyr-1_6b](https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b) ## 🧩 Configuration ```yaml slices: - sources: - model: aipib/stablelm-2-zephyr-1.6b-slerpx9 layer_range: [0, 24] - model: stabilityai/stablelm-2-zephyr-1_6b layer_range: [0, 24] merge_method: slerp base_model: aipib/stablelm-2-zephyr-1.6b-slerpx9 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "aipib/stablelm-2-zephyr-1.6b-slerpx10" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"tags": ["merge", "mergekit", "lazymergekit", "aipib/stablelm-2-zephyr-1.6b-slerpx9", "stabilityai/stablelm-2-zephyr-1_6b"], "base_model": ["aipib/stablelm-2-zephyr-1.6b-slerpx9", "stabilityai/stablelm-2-zephyr-1_6b"]}
aipib/stablelm-2-zephyr-1.6b-slerpx10
null
[ "transformers", "safetensors", "stablelm", "text-generation", "merge", "mergekit", "lazymergekit", "aipib/stablelm-2-zephyr-1.6b-slerpx9", "stabilityai/stablelm-2-zephyr-1_6b", "conversational", "base_model:aipib/stablelm-2-zephyr-1.6b-slerpx9", "base_model:stabilityai/stablelm-2-zephyr-1_6b", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T09:16:35+00:00
[]
[]
TAGS #transformers #safetensors #stablelm #text-generation #merge #mergekit #lazymergekit #aipib/stablelm-2-zephyr-1.6b-slerpx9 #stabilityai/stablelm-2-zephyr-1_6b #conversational #base_model-aipib/stablelm-2-zephyr-1.6b-slerpx9 #base_model-stabilityai/stablelm-2-zephyr-1_6b #autotrain_compatible #endpoints_compatible #region-us
# stablelm-2-zephyr-1.6b-slerpx10 stablelm-2-zephyr-1.6b-slerpx10 is a merge of the following models using LazyMergekit: * aipib/stablelm-2-zephyr-1.6b-slerpx9 * stabilityai/stablelm-2-zephyr-1_6b ## Configuration ## Usage
[ "# stablelm-2-zephyr-1.6b-slerpx10\n\nstablelm-2-zephyr-1.6b-slerpx10 is a merge of the following models using LazyMergekit:\n* aipib/stablelm-2-zephyr-1.6b-slerpx9\n* stabilityai/stablelm-2-zephyr-1_6b", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #merge #mergekit #lazymergekit #aipib/stablelm-2-zephyr-1.6b-slerpx9 #stabilityai/stablelm-2-zephyr-1_6b #conversational #base_model-aipib/stablelm-2-zephyr-1.6b-slerpx9 #base_model-stabilityai/stablelm-2-zephyr-1_6b #autotrain_compatible #endpoints_compatible #region-us \n", "# stablelm-2-zephyr-1.6b-slerpx10\n\nstablelm-2-zephyr-1.6b-slerpx10 is a merge of the following models using LazyMergekit:\n* aipib/stablelm-2-zephyr-1.6b-slerpx9\n* stabilityai/stablelm-2-zephyr-1_6b", "## Configuration", "## Usage" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
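The card's quick-start section is empty; a generic loading sketch follows. The token-classification (NER) head is only inferred from the repository name, and the example sentence is a placeholder.

```python
# Hypothetical quick-start sketch; the NER head is an assumption, not documented.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

repo_id = "Andro9669/gemma-7b-ner"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForTokenClassification.from_pretrained(repo_id)

# Aggregate sub-word predictions into whole-entity spans.
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```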
{"library_name": "transformers", "tags": []}
Andro9669/gemma-7b-ner
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T09:16:42+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
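The card's quick-start section is empty; a plausible loading sketch follows, assuming this repository holds a causal-LM PEFT adapter for the stated base model, `mistralai/Mistral-7B-Instruct-v0.2`. The prompt is a placeholder.

```python
# Sketch only: AutoPeftModelForCausalLM reads the base model id from the adapter config.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "nizamudma/Enlighten_Instruct_Mistral"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

prompt = "[INST] Summarize what PEFT adapters are. [/INST]"  # Mistral-Instruct prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```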
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
nizamudma/Enlighten_Instruct_Mistral
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-27T09:19:19+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_4iters_bs128_nodpo_only4w_iter_3 This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_2](https://huggingface.co/ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_2) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
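The card lists the DPO training setup but no inference snippet; the sketch below shows the usual chat-template workflow for Mistral-style checkpoints like this one. The prompt content and sampling settings are placeholders, not details from the card.

```python
# Illustrative inference sketch; sampling settings are arbitrary choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```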
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_2", "model-index": [{"name": "0.001_4iters_bs128_nodpo_only4w_iter_3", "results": []}]}
ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_3
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T09:19:52+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.001_4iters_bs128_nodpo_only4w_iter_3 This model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_2 on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
[ "# 0.001_4iters_bs128_nodpo_only4w_iter_3\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_2 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.001_4iters_bs128_nodpo_only4w_iter_3\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_2 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_virus_covid-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset. It achieves the following results on the evaluation set: - Loss: 1.6944 - F1 Score: 0.3658 - Accuracy: 0.3593 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 2.184 | 0.35 | 200 | 2.1806 | 0.0984 | 0.1410 | | 2.1758 | 0.7 | 400 | 2.1685 | 0.1274 | 0.1546 | | 2.1622 | 1.05 | 600 | 2.1553 | 0.1425 | 0.1584 | | 2.1497 | 1.4 | 800 | 2.1472 | 0.1332 | 0.1660 | | 2.1433 | 1.75 | 1000 | 2.1346 | 0.1575 | 0.1816 | | 2.1378 | 2.09 | 1200 | 2.1314 | 0.1536 | 0.1814 | | 2.1273 | 2.44 | 1400 | 2.1180 | 0.1825 | 0.1977 | | 2.1109 | 2.79 | 1600 | 2.0769 | 0.2212 | 0.2410 | | 2.0785 | 3.14 | 1800 | 2.0240 | 0.2335 | 0.2455 | | 2.0481 | 3.49 | 2000 | 2.0070 | 0.2210 | 0.2366 | | 2.0342 | 3.84 | 2200 | 1.9759 | 0.2551 | 0.2642 | | 2.0152 | 4.19 | 2400 | 1.9467 | 0.2583 | 0.2641 | | 1.9896 | 4.54 | 2600 | 1.9240 | 0.2606 | 0.2705 | | 1.9737 | 4.89 | 2800 | 1.9050 | 0.2755 | 0.2839 | | 1.9589 | 5.24 | 3000 | 1.8869 | 0.2745 | 0.2808 | | 1.9479 | 5.58 | 3200 | 1.8701 | 0.2895 | 0.2978 | | 1.9272 | 5.93 | 3400 | 1.8528 | 0.2860 | 0.2997 | | 1.9178 | 6.28 | 3600 | 1.8392 | 0.2965 | 0.3055 | | 1.9001 | 6.63 | 3800 | 1.8250 | 0.3025 | 0.3105 | | 1.8966 | 6.98 | 4000 | 1.8116 | 0.3258 | 0.3264 | | 1.883 | 7.33 | 4200 | 1.8034 | 0.3098 | 0.3157 | | 1.8767 | 7.68 | 4400 | 1.7900 | 0.3284 | 0.3247 | | 1.8691 | 8.03 | 4600 | 1.7822 | 0.3320 | 0.3329 | | 1.8488 | 8.38 | 4800 | 1.7719 | 0.3205 | 0.3213 | | 1.8547 | 8.73 | 5000 | 1.7690 | 0.3199 | 0.3298 | | 1.8439 | 9.08 | 5200 | 1.7554 | 0.3440 | 0.3359 | | 1.8313 | 9.42 | 5400 | 1.7501 | 0.3355 | 0.3386 | | 1.8329 | 9.77 | 5600 | 1.7463 | 0.3311 | 0.3323 | | 1.8221 | 10.12 | 5800 | 1.7415 | 0.3416 | 0.3376 | | 1.8163 | 10.47 | 6000 | 1.7370 | 0.3497 | 0.3462 | | 1.8151 | 10.82 | 6200 | 1.7373 | 0.3455 | 0.3403 | | 1.8001 | 11.17 | 6400 | 1.7307 | 0.3544 | 0.3462 | | 1.8027 | 11.52 | 6600 | 1.7220 | 0.3595 | 0.3556 | | 1.7998 | 11.87 | 6800 | 1.7185 | 0.3544 | 0.3508 | | 1.7991 | 12.22 | 7000 | 1.7180 | 0.3561 | 0.3515 | | 1.7894 | 12.57 | 7200 | 1.7138 | 0.3513 | 0.3510 | | 1.7912 | 12.91 | 7400 | 1.7112 | 0.3590 | 0.3571 | | 1.786 | 13.26 | 7600 | 1.7099 | 0.3555 | 0.3554 | | 1.7827 | 13.61 | 7800 | 1.7071 | 0.3570 | 0.3547 | | 1.7851 | 13.96 | 8000 | 1.7045 | 0.3632 | 0.3584 | | 1.7762 | 14.31 | 8200 | 1.7039 | 0.3622 | 0.3583 | | 1.7805 | 14.66 | 8400 | 1.7010 | 0.3603 | 0.3550 | | 1.7831 | 15.01 | 8600 | 1.7021 | 0.3588 | 0.3558 | | 1.7868 | 15.36 | 8800 | 1.6991 | 0.3655 | 0.3590 | | 1.7638 | 15.71 | 9000 | 1.7003 | 0.3590 | 0.3572 | | 1.773 | 16.06 | 9200 | 1.6972 | 0.3630 | 0.3606 | | 1.7804 | 16.4 | 9400 | 1.6979 | 0.3610 | 0.3589 | | 1.7643 | 16.75 | 9600 | 1.6970 | 0.3617 | 0.3592 | | 1.7752 | 17.1 | 9800 | 1.6966 | 0.3654 | 0.3624 | | 1.7749 | 17.45 | 10000 | 1.6965 | 0.3646 | 0.3609 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
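The results table reports an F1 Score and an Accuracy that differ slightly at every step. Assuming the F1 is macro-averaged over the multi-class COVID labels (the card does not say which averaging is used), such gaps are expected, as this toy sketch illustrates:

```python
# Toy illustration of why macro-F1 and accuracy diverge on imbalanced
# multi-class predictions; macro averaging is an assumption about this card.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 0, 0, 0, 1, 2]
y_pred = [0, 0, 0, 0, 1, 1]  # class 2 is never predicted

print(accuracy_score(y_true, y_pred))                              # 0.833: per-example matches
print(f1_score(y_true, y_pred, average="macro", zero_division=0))  # 0.556: unweighted mean of per-class F1
```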
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_virus_covid-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_virus_covid-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:20:14+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_virus\_covid-seqsight\_16384\_512\_22M-L1\_f ================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset. It achieves the following results on the evaluation set: * Loss: 1.6944 * F1 Score: 0.3658 * Accuracy: 0.3593 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_virus_covid-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset. It achieves the following results on the evaluation set: - Loss: 1.3754 - F1 Score: 0.4766 - Accuracy: 0.4791 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 2.183 | 0.35 | 200 | 2.1753 | 0.1143 | 0.1463 | | 2.168 | 0.7 | 400 | 2.1554 | 0.1426 | 0.1638 | | 2.1485 | 1.05 | 600 | 2.1399 | 0.1613 | 0.1782 | | 2.1112 | 1.4 | 800 | 2.0532 | 0.1978 | 0.2303 | | 2.0402 | 1.75 | 1000 | 1.9520 | 0.2582 | 0.2706 | | 1.9812 | 2.09 | 1200 | 1.8951 | 0.2654 | 0.2796 | | 1.9312 | 2.44 | 1400 | 1.8417 | 0.3154 | 0.3140 | | 1.8974 | 2.79 | 1600 | 1.8054 | 0.3053 | 0.3178 | | 1.8532 | 3.14 | 1800 | 1.7496 | 0.3306 | 0.3415 | | 1.8195 | 3.49 | 2000 | 1.7189 | 0.3322 | 0.3394 | | 1.7957 | 3.84 | 2200 | 1.6947 | 0.3599 | 0.3673 | | 1.7691 | 4.19 | 2400 | 1.6777 | 0.3578 | 0.3557 | | 1.7459 | 4.54 | 2600 | 1.6312 | 0.3753 | 0.3800 | | 1.7264 | 4.89 | 2800 | 1.5969 | 0.3991 | 0.3980 | | 1.7065 | 5.24 | 3000 | 1.5840 | 0.3877 | 0.3929 | | 1.6869 | 5.58 | 3200 | 1.5602 | 0.4092 | 0.4092 | | 1.667 | 5.93 | 3400 | 1.5537 | 0.4011 | 0.4070 | | 1.658 | 6.28 | 3600 | 1.5381 | 0.4159 | 0.4101 | | 1.6335 | 6.63 | 3800 | 1.5251 | 0.4197 | 0.4226 | | 1.6272 | 6.98 | 4000 | 1.5171 | 0.4268 | 0.4243 | | 1.6189 | 7.33 | 4200 | 1.5043 | 0.4287 | 0.4277 | | 1.6091 | 7.68 | 4400 | 1.4997 | 0.4320 | 0.4278 | | 1.6083 | 8.03 | 4600 | 1.4909 | 0.4262 | 0.4298 | | 1.592 | 8.38 | 4800 | 1.4845 | 0.4329 | 0.4339 | | 1.5949 | 8.73 | 5000 | 1.4783 | 0.4324 | 0.4365 | | 1.5811 | 9.08 | 5200 | 1.4724 | 0.4457 | 0.4391 | | 1.5776 | 9.42 | 5400 | 1.4589 | 0.4407 | 0.4475 | | 1.5626 | 9.77 | 5600 | 1.4548 | 0.4399 | 0.4434 | | 1.5606 | 10.12 | 5800 | 1.4455 | 0.4404 | 0.4388 | | 1.5556 | 10.47 | 6000 | 1.4411 | 0.4515 | 0.4530 | | 1.5497 | 10.82 | 6200 | 1.4415 | 0.4445 | 0.4424 | | 1.5319 | 11.17 | 6400 | 1.4394 | 0.4505 | 0.4475 | | 1.5385 | 11.52 | 6600 | 1.4339 | 0.4481 | 0.4517 | | 1.5371 | 11.87 | 6800 | 1.4245 | 0.4490 | 0.4537 | | 1.5316 | 12.22 | 7000 | 1.4166 | 0.4562 | 0.4582 | | 1.5238 | 12.57 | 7200 | 1.4126 | 0.4631 | 0.4619 | | 1.5213 | 12.91 | 7400 | 1.4115 | 0.4552 | 0.4576 | | 1.5128 | 13.26 | 7600 | 1.4098 | 0.4589 | 0.4621 | | 1.5101 | 13.61 | 7800 | 1.4081 | 0.4622 | 0.4638 | | 1.514 | 13.96 | 8000 | 1.3972 | 0.4661 | 0.4668 | | 1.5063 | 14.31 | 8200 | 1.3991 | 0.4621 | 0.4664 | | 1.506 | 14.66 | 8400 | 1.3933 | 0.4733 | 0.4689 | | 1.5063 | 15.01 | 8600 | 1.3922 | 0.4724 | 0.4741 | | 1.5076 | 15.36 | 8800 | 1.3908 | 0.4719 | 0.4677 | | 1.4882 | 15.71 | 9000 | 1.3858 | 0.4737 | 0.4767 | | 1.4974 | 16.06 | 9200 | 1.3873 | 0.4726 | 0.4723 | | 1.493 | 16.4 | 9400 | 1.3830 | 0.4748 | 0.4758 | | 1.494 | 16.75 | 9600 | 1.3838 | 0.4776 | 0.4780 | | 1.4954 | 17.1 | 9800 | 1.3843 | 0.4761 | 0.4773 | | 1.4972 | 17.45 | 10000 | 1.3839 | 0.4762 | 0.4763 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_virus_covid-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_virus_covid-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:20:26+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_virus\_covid-seqsight\_16384\_512\_22M-L8\_f ================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset. It achieves the following results on the evaluation set: * Loss: 1.3754 * F1 Score: 0.4766 * Accuracy: 0.4791 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_virus_covid-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset. It achieves the following results on the evaluation set: - Loss: 1.1612 - F1 Score: 0.5677 - Accuracy: 0.5543 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 2.1812 | 0.35 | 200 | 2.1672 | 0.1391 | 0.1588 | | 2.1584 | 0.7 | 400 | 2.1443 | 0.1573 | 0.1804 | | 2.1066 | 1.05 | 600 | 2.0052 | 0.2448 | 0.2480 | | 1.9632 | 1.4 | 800 | 1.8515 | 0.2767 | 0.2916 | | 1.8723 | 1.75 | 1000 | 1.7607 | 0.3235 | 0.3249 | | 1.8149 | 2.09 | 1200 | 1.6958 | 0.3517 | 0.3556 | | 1.7576 | 2.44 | 1400 | 1.6498 | 0.3824 | 0.3734 | | 1.7077 | 2.79 | 1600 | 1.6015 | 0.3790 | 0.3794 | | 1.6659 | 3.14 | 1800 | 1.5665 | 0.3971 | 0.3976 | | 1.6344 | 3.49 | 2000 | 1.5340 | 0.4192 | 0.4187 | | 1.6228 | 3.84 | 2200 | 1.5033 | 0.4299 | 0.4351 | | 1.5867 | 4.19 | 2400 | 1.4659 | 0.4603 | 0.4440 | | 1.5656 | 4.54 | 2600 | 1.4402 | 0.4638 | 0.4561 | | 1.549 | 4.89 | 2800 | 1.4424 | 0.4612 | 0.4551 | | 1.5203 | 5.24 | 3000 | 1.4167 | 0.4709 | 0.4663 | | 1.5152 | 5.58 | 3200 | 1.3929 | 0.4737 | 0.4711 | | 1.4973 | 5.93 | 3400 | 1.3898 | 0.4785 | 0.4786 | | 1.4863 | 6.28 | 3600 | 1.3780 | 0.4960 | 0.4768 | | 1.4616 | 6.63 | 3800 | 1.3671 | 0.4853 | 0.4819 | | 1.4598 | 6.98 | 4000 | 1.3554 | 0.4896 | 0.4841 | | 1.4535 | 7.33 | 4200 | 1.3384 | 0.4983 | 0.4926 | | 1.4274 | 7.68 | 4400 | 1.3435 | 0.4958 | 0.4863 | | 1.4269 | 8.03 | 4600 | 1.3118 | 0.5042 | 0.5036 | | 1.4097 | 8.38 | 4800 | 1.2962 | 0.5136 | 0.5104 | | 1.4067 | 8.73 | 5000 | 1.2886 | 0.5196 | 0.5116 | | 1.3773 | 9.08 | 5200 | 1.2697 | 0.5287 | 0.5199 | | 1.3703 | 9.42 | 5400 | 1.2614 | 0.5225 | 0.5249 | | 1.3633 | 9.77 | 5600 | 1.2499 | 0.5329 | 0.5260 | | 1.3536 | 10.12 | 5800 | 1.2483 | 0.5298 | 0.5203 | | 1.3502 | 10.47 | 6000 | 1.2393 | 0.5280 | 0.5205 | | 1.336 | 10.82 | 6200 | 1.2345 | 0.5471 | 0.5327 | | 1.3166 | 11.17 | 6400 | 1.2281 | 0.5455 | 0.5351 | | 1.3271 | 11.52 | 6600 | 1.2199 | 0.5461 | 0.5312 | | 1.315 | 11.87 | 6800 | 1.2072 | 0.5432 | 0.5356 | | 1.3081 | 12.22 | 7000 | 1.1983 | 0.5519 | 0.5420 | | 1.2946 | 12.57 | 7200 | 1.1989 | 0.5517 | 0.5391 | | 1.2906 | 12.91 | 7400 | 1.1945 | 0.5510 | 0.5392 | | 1.2855 | 13.26 | 7600 | 1.1876 | 0.5470 | 0.5353 | | 1.2782 | 13.61 | 7800 | 1.1817 | 0.5538 | 0.5428 | | 1.2784 | 13.96 | 8000 | 1.1800 | 0.5596 | 0.5508 | | 1.2633 | 14.31 | 8200 | 1.1792 | 0.5572 | 0.5470 | | 1.2692 | 14.66 | 8400 | 1.1727 | 0.5657 | 0.5514 | | 1.2673 | 15.01 | 8600 | 1.1761 | 0.5507 | 0.5427 | | 1.2663 | 15.36 | 8800 | 1.1671 | 0.5626 | 0.5515 | | 1.2495 | 15.71 | 9000 | 1.1669 | 0.5583 | 0.5490 | | 1.2595 | 16.06 | 9200 | 1.1688 | 0.5560 | 0.5456 | | 1.2517 | 16.4 | 9400 | 1.1642 | 0.5562 | 0.5473 | | 1.255 | 16.75 | 9600 | 1.1635 | 0.5570 | 0.5473 | | 1.2494 | 17.1 | 9800 | 1.1625 | 0.5585 | 0.5485 | | 1.2504 | 17.45 | 10000 | 1.1629 | 0.5581 | 0.5487 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_virus_covid-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_virus_covid-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T09:21:01+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_virus\_covid-seqsight\_16384\_512\_22M-L32\_f ================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset. It achieves the following results on the evaluation set: * Loss: 1.1612 * F1 Score: 0.5677 * Accuracy: 0.5543 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_total_Instruction0_SAOPL_v1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
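A minimal inference sketch for this checkpoint, assuming standard seq2seq generation with transformers; the Vietnamese input sentence and the prompt format expected by this COQE fine-tune are placeholders, since the card does not document them.

```python
# Sketch only: loads the fine-tuned viT5 checkpoint for text2text generation.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ThuyNT/CS505_COQE_viT5_total_Instruction0_SAOPL_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Câu ví dụ tiếng Việt.", return_tensors="pt")  # placeholder input
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```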
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_total_Instruction0_SAOPL_v1", "results": []}]}
ThuyNT/CS505_COQE_viT5_total_Instruction0_SAOPL_v1
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T09:21:29+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# CS505_COQE_viT5_total_Instruction0_SAOPL_v1 This model is a fine-tuned version of VietAI/vit5-large on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# CS505_COQE_viT5_total_Instruction0_SAOPL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# CS505_COQE_viT5_total_Instruction0_SAOPL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nepali_t5 This model is a fine-tuned version of [rujengelal/nepali_t5](https://huggingface.co/rujengelal/nepali_t5) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6633 - Bleu: 6.3134 - Gen Len: 15.9835 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:------:|:-------:| | 3.0928 | 1.0 | 17734 | 2.8330 | 5.4935 | 15.9053 | | 3.101 | 2.0 | 35468 | 2.8127 | 5.5409 | 15.8787 | | 3.0165 | 3.0 | 53202 | 2.7814 | 5.6622 | 15.9238 | | 2.9973 | 4.0 | 70936 | 2.7532 | 5.8108 | 15.8996 | | 2.8885 | 5.0 | 88670 | 2.7294 | 5.9077 | 15.8805 | | 2.8114 | 6.0 | 106404 | 2.7074 | 6.1401 | 15.9749 | | 2.7791 | 7.0 | 124138 | 2.6905 | 6.1567 | 15.9531 | | 2.7729 | 8.0 | 141872 | 2.6782 | 6.1865 | 15.9688 | | 2.7128 | 9.0 | 159606 | 2.6699 | 6.2233 | 16.063 | | 2.7398 | 10.0 | 177340 | 2.6633 | 6.3134 | 15.9835 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
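A minimal inference sketch, assuming standard seq2seq generation; the card does not state the translation direction or any required prefix, so the English input below is a placeholder.

```python
# Sketch only: translation direction and prompt format are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "rujengelal/nepali_t5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
# Beam search is a common choice for BLEU-evaluated models; not specified by the card.
out = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```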
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "rujengelal/nepali_t5", "model-index": [{"name": "nepali_t5", "results": []}]}
rujengelal/nepali_t5
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:rujengelal/nepali_t5", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T09:26:01+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-rujengelal/nepali_t5 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
nepali\_t5 ========== This model is a fine-tuned version of rujengelal/nepali\_t5 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.6633 * Bleu: 6.3134 * Gen Len: 15.9835 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-rujengelal/nepali_t5 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]