| Column | Type | Stats |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0–18.3M |
| metadata | stringlengths | 2–1.07B |
| id | stringlengths | 5–122 |
| last_modified | null | always null |
| tags | listlengths | 1–1.84k |
| sha | null | always null |
| created_at | stringlengths | 25–25 |
null
null
{}
q409640976/mllama
null
[ "region:us" ]
null
2024-04-27T03:40:08+00:00
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Beans_disease_classficationv4 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [AI-Lab-Makerere/beans](https://huggingface.co/datasets/AI-Lab-Makerere/beans) dataset. It achieves the following results on the evaluation set: - Loss: 0.0419 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0023 | 1.0 | 17 | 0.1371 | 0.9774 | | 0.002 | 2.0 | 34 | 0.0993 | 0.9774 | | 0.0234 | 3.0 | 51 | 0.0419 | 0.9925 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.13.3
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["AI-Lab-Makerere/beans"], "metrics": ["accuracy"], "model-index": [{"name": "Beans_disease_classficationv4", "results": []}]}
pwk666/Beans_disease_classficationv4
null
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "en", "dataset:AI-Lab-Makerere/beans", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:41:04+00:00
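The Beans_disease_classficationv4 card above documents a ViT image classifier but leaves the usage sections empty. A minimal inference sketch, assuming the checkpoint works with the standard `transformers` image-classification pipeline (consistent with the row's `vit` and `image-classification` tags); the image path is a placeholder:

```python
# Hedged sketch: classify a bean-leaf photo with the fine-tuned ViT.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="pwk666/Beans_disease_classficationv4",
)
# Returns a list of {"label": ..., "score": ...} dicts, best first.
print(classifier("bean_leaf.jpg"))
```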
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me3-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.6576 - F1 Score: 0.7095 - Accuracy: 0.7095 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.648 | 0.87 | 200 | 0.6101 | 0.6657 | 0.6655 | | 0.607 | 1.74 | 400 | 0.6051 | 0.6739 | 0.6755 | | 0.5902 | 2.61 | 600 | 0.5977 | 0.6826 | 0.6823 | | 0.5802 | 3.48 | 800 | 0.5912 | 0.6902 | 0.6899 | | 0.5747 | 4.35 | 1000 | 0.5913 | 0.6863 | 0.6861 | | 0.568 | 5.22 | 1200 | 0.5884 | 0.6939 | 0.6957 | | 0.5604 | 6.09 | 1400 | 0.6068 | 0.6851 | 0.6891 | | 0.5541 | 6.96 | 1600 | 0.5876 | 0.6939 | 0.6943 | | 0.5426 | 7.83 | 1800 | 0.5863 | 0.6967 | 0.6965 | | 0.5431 | 8.7 | 2000 | 0.5971 | 0.6922 | 0.6921 | | 0.5313 | 9.57 | 2200 | 0.5867 | 0.6924 | 0.6921 | | 0.5298 | 10.43 | 2400 | 0.5992 | 0.6965 | 0.6962 | | 0.5217 | 11.3 | 2600 | 0.5850 | 0.6947 | 0.6951 | | 0.5217 | 12.17 | 2800 | 0.6071 | 0.6792 | 0.6804 | | 0.5125 | 13.04 | 3000 | 0.5930 | 0.6983 | 0.6981 | | 0.5045 | 13.91 | 3200 | 0.6043 | 0.7008 | 0.7005 | | 0.4953 | 14.78 | 3400 | 0.6141 | 0.6969 | 0.6978 | | 0.4921 | 15.65 | 3600 | 0.6001 | 0.7054 | 0.7052 | | 0.4848 | 16.52 | 3800 | 0.5976 | 0.6992 | 0.6989 | | 0.4793 | 17.39 | 4000 | 0.6249 | 0.7014 | 0.7019 | | 0.4798 | 18.26 | 4200 | 0.6202 | 0.6972 | 0.6978 | | 0.4693 | 19.13 | 4400 | 0.6179 | 0.6989 | 0.6986 | | 0.4657 | 20.0 | 4600 | 0.6190 | 0.6920 | 0.6921 | | 0.4592 | 20.87 | 4800 | 0.6277 | 0.6969 | 0.6967 | | 0.4517 | 21.74 | 5000 | 0.6353 | 0.6970 | 0.6967 | | 0.4494 | 22.61 | 5200 | 0.6344 | 0.6977 | 0.6978 | | 0.445 | 23.48 | 5400 | 0.6328 | 0.6964 | 0.6967 | | 0.4388 | 24.35 | 5600 | 0.6401 | 0.6945 | 0.6943 | | 0.4357 | 25.22 | 5800 | 0.6670 | 0.6972 | 0.6973 | | 0.4274 | 26.09 | 6000 | 0.6696 | 0.7014 | 0.7014 | | 0.4281 | 26.96 | 6200 | 0.6444 | 0.7005 | 0.7005 | | 0.4162 | 27.83 | 6400 | 0.6686 | 0.7077 | 0.7076 | | 0.4204 | 28.7 | 6600 | 0.6702 | 0.6922 | 0.6921 | | 0.414 | 29.57 | 6800 | 0.6759 | 0.6919 | 0.6916 | | 0.4063 | 30.43 | 7000 | 0.6645 | 0.6951 | 0.6948 | | 0.4118 | 31.3 | 7200 | 0.6744 | 0.6946 | 0.6943 | | 0.4015 | 32.17 | 7400 | 0.6699 | 0.6989 | 0.6986 | | 0.3984 | 33.04 | 7600 | 0.6737 | 0.7026 | 0.7024 | | 0.4009 | 33.91 | 7800 | 0.6726 | 0.6994 | 0.6992 | | 0.3918 | 34.78 | 8000 | 0.6883 | 0.7000 | 0.6997 | | 0.3906 | 35.65 | 8200 | 0.6940 | 0.6959 | 0.6957 | | 0.393 | 36.52 | 8400 | 0.6872 | 0.6976 | 0.6973 | | 0.3876 | 37.39 | 8600 | 0.6973 | 0.7008 | 0.7005 | | 0.3806 | 38.26 | 8800 | 0.7024 | 0.6989 | 0.6986 | | 0.386 | 39.13 | 9000 | 0.7013 | 0.7006 | 0.7003 | | 0.3822 | 40.0 | 9200 | 0.6997 | 0.6972 | 0.6970 | | 0.381 | 40.87 | 9400 | 0.7042 | 0.7011 | 0.7008 | | 0.3766 | 41.74 | 9600 | 0.7011 | 0.6973 | 0.6970 | | 0.3796 | 42.61 | 9800 | 0.7035 | 0.6951 | 0.6948 | | 0.3775 | 43.48 | 10000 | 0.7048 | 0.6956 | 0.6954 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T03:41:05+00:00
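The seqsight rows in this dump store LoRA-style fine-tunes as PEFT adapters rather than full checkpoints. A minimal loading sketch, assuming the base model is compatible with `AutoModelForSequenceClassification` with two labels (matching the card's binary F1/accuracy metrics); the custom DNA backbone may require `trust_remote_code=True` or a different auto class:

```python
# Hedged sketch: attach the GUE_EMP_H3K4me3 adapter to its seqsight base.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_8192_512_30M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)  # assumes the tokenizer ships with the base repo
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # loads the safetensors adapter weights
model.eval()
```

The same pattern should apply to the other seqsight adapter rows below, swapping in the respective `adapter_id`.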
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
Mervyn999/mistral-7b-platypus
null
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T03:43:23+00:00
feature-extraction
transformers
{}
huangshugeng/skinGlm
null
[ "transformers", "pytorch", "chatglm", "feature-extraction", "custom_code", "region:us" ]
null
2024-04-27T03:43:28+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # santhosh207/distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [santhosh207/distilbert-base-uncased-finetuned-ner](https://huggingface.co/santhosh207/distilbert-base-uncased-finetuned-ner) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1538 - Validation Loss: 0.4292 - Train Precision: 0.4306 - Train Recall: 0.1479 - Train F1: 0.2201 - Train Accuracy: 0.9093 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 424, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 0.1538 | 0.4292 | 0.4306 | 0.1479 | 0.2201 | 0.9093 | 0 | ### Framework versions - Transformers 4.40.1 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "santhosh207/distilbert-base-uncased-finetuned-ner", "model-index": [{"name": "santhosh207/distilbert-base-uncased-finetuned-ner", "results": []}]}
santhosh207/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "tf", "tensorboard", "distilbert", "token-classification", "generated_from_keras_callback", "base_model:santhosh207/distilbert-base-uncased-finetuned-ner", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:44:06+00:00
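The NER card above reports metrics but no inference snippet. A hedged sketch for the TensorFlow checkpoint (the row's tags include `tf` and `distilbert`); the example sentence is arbitrary:

```python
# Hedged sketch: tag tokens with the fine-tuned DistilBERT NER model.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

model_id = "santhosh207/distilbert-base-uncased-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("Hugging Face is based in New York City", return_tensors="tf")
pred_ids = tf.argmax(model(**inputs).logits, axis=-1)[0].numpy()
print([model.config.id2label[int(i)] for i in pred_ids])  # one label per wordpiece
```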
null
null
{}
LilyTheRoller/qwen-7B-L
null
[ "gguf", "region:us" ]
null
2024-04-27T03:45:15+00:00
null
null
# DavidAU/Octopus-v2-Q8_0-GGUF This model was converted to GGUF format from [`NexaAIDev/Octopus-v2`](https://huggingface.co/NexaAIDev/Octopus-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/NexaAIDev/Octopus-v2) for more details on the model. ## Use with llama.cpp Install llama.cpp through Homebrew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Octopus-v2-Q8_0-GGUF --model octopus-v2.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Octopus-v2-Q8_0-GGUF --model octopus-v2.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m octopus-v2.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["function calling", "on-device language model", "android", "llama-cpp", "gguf-my-repo"], "base_model": "google/gemma-2b", "inference": false, "space": false, "spaces": false, "model-index": [{"name": "Octopus-V2-2B", "results": []}]}
DavidAU/Octopus-v2-Q8_0-GGUF
null
[ "gguf", "function calling", "on-device language model", "android", "llama-cpp", "gguf-my-repo", "en", "base_model:google/gemma-2b", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-27T03:46:15+00:00
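The Octopus card above covers the llama.cpp CLI and server; the same GGUF file can also be driven programmatically. A minimal sketch with the `llama-cpp-python` bindings, assuming the quantized file from the card's examples has been downloaded locally:

```python
# Hedged sketch: run the Q8_0 GGUF via llama-cpp-python instead of the CLI.
from llama_cpp import Llama

llm = Llama(model_path="octopus-v2.Q8_0.gguf", n_ctx=2048)  # context size mirrors the server example
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```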
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset. It achieves the following results on the evaluation set: - Loss: 0.2668 - F1 Score: 0.9042 - Accuracy: 0.9042 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.4016 | 2.17 | 200 | 0.3093 | 0.8847 | 0.8843 | | 0.2964 | 4.35 | 400 | 0.2958 | 0.8888 | 0.8884 | | 0.283 | 6.52 | 600 | 0.2886 | 0.8907 | 0.8905 | | 0.2802 | 8.7 | 800 | 0.2837 | 0.8927 | 0.8925 | | 0.2722 | 10.87 | 1000 | 0.2801 | 0.8925 | 0.8925 | | 0.2687 | 13.04 | 1200 | 0.2870 | 0.8915 | 0.8912 | | 0.2618 | 15.22 | 1400 | 0.2740 | 0.8946 | 0.8946 | | 0.2601 | 17.39 | 1600 | 0.2724 | 0.9002 | 0.9001 | | 0.257 | 19.57 | 1800 | 0.2734 | 0.8987 | 0.8987 | | 0.2554 | 21.74 | 2000 | 0.2875 | 0.8881 | 0.8877 | | 0.2487 | 23.91 | 2200 | 0.2870 | 0.8901 | 0.8898 | | 0.2503 | 26.09 | 2400 | 0.2836 | 0.8887 | 0.8884 | | 0.245 | 28.26 | 2600 | 0.2713 | 0.8952 | 0.8953 | | 0.2428 | 30.43 | 2800 | 0.2788 | 0.8914 | 0.8912 | | 0.2393 | 32.61 | 3000 | 0.2767 | 0.8981 | 0.8980 | | 0.2372 | 34.78 | 3200 | 0.2764 | 0.8913 | 0.8912 | | 0.2383 | 36.96 | 3400 | 0.2766 | 0.8954 | 0.8953 | | 0.2335 | 39.13 | 3600 | 0.2768 | 0.8966 | 0.8966 | | 0.2297 | 41.3 | 3800 | 0.2784 | 0.8993 | 0.8994 | | 0.2283 | 43.48 | 4000 | 0.2866 | 0.8911 | 0.8912 | | 0.235 | 45.65 | 4200 | 0.2793 | 0.8943 | 0.8946 | | 0.2271 | 47.83 | 4400 | 0.2771 | 0.8959 | 0.8960 | | 0.2257 | 50.0 | 4600 | 0.2761 | 0.8925 | 0.8925 | | 0.2237 | 52.17 | 4800 | 0.2727 | 0.9001 | 0.9001 | | 0.2266 | 54.35 | 5000 | 0.2853 | 0.8934 | 0.8932 | | 0.2203 | 56.52 | 5200 | 0.2904 | 0.8914 | 0.8912 | | 0.2184 | 58.7 | 5400 | 0.2832 | 0.8933 | 0.8932 | | 0.216 | 60.87 | 5600 | 0.2955 | 0.8873 | 0.8871 | | 0.218 | 63.04 | 5800 | 0.2929 | 0.8866 | 0.8864 | | 0.2166 | 65.22 | 6000 | 0.2891 | 0.8927 | 0.8925 | | 0.2161 | 67.39 | 6200 | 0.2840 | 0.8940 | 0.8939 | | 0.2122 | 69.57 | 6400 | 0.2867 | 0.8961 | 0.8960 | | 0.2138 | 71.74 | 6600 | 0.2875 | 0.8939 | 0.8939 | | 0.2138 | 73.91 | 6800 | 0.2846 | 0.8953 | 0.8953 | | 0.21 | 76.09 | 7000 | 0.2908 | 0.8872 | 0.8871 | | 0.211 | 78.26 | 7200 | 0.2894 | 0.8934 | 0.8932 | | 0.2071 | 80.43 | 7400 | 0.2900 | 0.8891 | 0.8891 | | 0.2095 | 82.61 | 7600 | 0.2854 | 0.8918 | 0.8919 | | 0.2119 | 84.78 | 7800 | 0.2875 | 0.8905 | 0.8905 | | 0.2056 | 86.96 | 8000 | 0.2869 | 0.8884 | 0.8884 | | 0.2087 | 89.13 | 8200 | 0.2868 | 0.8919 | 0.8919 | | 0.2078 | 91.3 | 8400 | 0.2907 | 0.8864 | 0.8864 | | 0.2015 | 93.48 | 8600 | 0.2913 | 0.8876 | 0.8877 | | 0.2047 | 95.65 | 8800 | 0.2891 | 0.8890 | 0.8891 | | 0.2057 | 97.83 | 9000 | 0.2881 | 0.8864 | 0.8864 | | 0.2044 | 100.0 | 9200 | 0.2899 | 0.8864 | 0.8864 | | 0.2065 | 102.17 | 9400 | 0.2871 | 0.8884 | 0.8884 | | 0.2046 | 104.35 | 9600 | 0.2894 | 0.8878 | 0.8877 | | 0.2024 | 106.52 | 9800 | 0.2879 | 0.8884 | 0.8884 | | 0.2046 | 108.7 | 10000 | 0.2888 | 0.8871 | 0.8871 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H4-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T03:46:21+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset. It achieves the following results on the evaluation set: - Loss: 0.2671 - F1 Score: 0.9090 - Accuracy: 0.9090 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3675 | 2.17 | 200 | 0.2843 | 0.8927 | 0.8925 | | 0.283 | 4.35 | 400 | 0.2795 | 0.8969 | 0.8966 | | 0.2681 | 6.52 | 600 | 0.2709 | 0.8974 | 0.8973 | | 0.2607 | 8.7 | 800 | 0.2902 | 0.8834 | 0.8830 | | 0.2506 | 10.87 | 1000 | 0.2741 | 0.8905 | 0.8905 | | 0.2438 | 13.04 | 1200 | 0.2707 | 0.8959 | 0.8960 | | 0.2325 | 15.22 | 1400 | 0.2902 | 0.8901 | 0.8898 | | 0.227 | 17.39 | 1600 | 0.2871 | 0.8833 | 0.8830 | | 0.2215 | 19.57 | 1800 | 0.2891 | 0.8941 | 0.8939 | | 0.2144 | 21.74 | 2000 | 0.2822 | 0.8920 | 0.8919 | | 0.2059 | 23.91 | 2200 | 0.2810 | 0.8992 | 0.8994 | | 0.2035 | 26.09 | 2400 | 0.2712 | 0.8959 | 0.8960 | | 0.1918 | 28.26 | 2600 | 0.2774 | 0.9000 | 0.9001 | | 0.1881 | 30.43 | 2800 | 0.2864 | 0.8898 | 0.8898 | | 0.1812 | 32.61 | 3000 | 0.2916 | 0.8936 | 0.8939 | | 0.1766 | 34.78 | 3200 | 0.2911 | 0.8940 | 0.8939 | | 0.1745 | 36.96 | 3400 | 0.2998 | 0.8932 | 0.8932 | | 0.1679 | 39.13 | 3600 | 0.2944 | 0.8916 | 0.8919 | | 0.1595 | 41.3 | 3800 | 0.3164 | 0.8902 | 0.8905 | | 0.1568 | 43.48 | 4000 | 0.3132 | 0.8939 | 0.8939 | | 0.1567 | 45.65 | 4200 | 0.3105 | 0.8894 | 0.8898 | | 0.1494 | 47.83 | 4400 | 0.3210 | 0.8883 | 0.8884 | | 0.1446 | 50.0 | 4600 | 0.3191 | 0.8861 | 0.8864 | | 0.1435 | 52.17 | 4800 | 0.3296 | 0.8879 | 0.8884 | | 0.141 | 54.35 | 5000 | 0.3251 | 0.8868 | 0.8871 | | 0.1379 | 56.52 | 5200 | 0.3268 | 0.8848 | 0.8850 | | 0.1322 | 58.7 | 5400 | 0.3385 | 0.8876 | 0.8877 | | 0.1268 | 60.87 | 5600 | 0.3419 | 0.8827 | 0.8830 | | 0.1255 | 63.04 | 5800 | 0.3518 | 0.8837 | 0.8836 | | 0.1257 | 65.22 | 6000 | 0.3507 | 0.8848 | 0.8850 | | 0.1243 | 67.39 | 6200 | 0.3453 | 0.8871 | 0.8871 | | 0.1151 | 69.57 | 6400 | 0.3665 | 0.8842 | 0.8843 | | 0.1137 | 71.74 | 6600 | 0.3716 | 0.8835 | 0.8836 | | 0.1175 | 73.91 | 6800 | 0.3582 | 0.8836 | 0.8836 | | 0.1119 | 76.09 | 7000 | 0.3703 | 0.8829 | 0.8830 | | 0.1102 | 78.26 | 7200 | 0.3807 | 0.8771 | 0.8775 | | 0.1062 | 80.43 | 7400 | 0.3845 | 0.8725 | 0.8727 | | 0.1085 | 82.61 | 7600 | 0.3857 | 0.8755 | 0.8761 | | 0.1057 | 84.78 | 7800 | 0.3874 | 0.8827 | 0.8830 | | 0.1028 | 86.96 | 8000 | 0.3859 | 0.8753 | 0.8754 | | 0.1033 | 89.13 | 8200 | 0.3981 | 0.8738 | 0.8741 | | 0.101 | 91.3 | 8400 | 0.4096 | 0.8750 | 0.8754 | | 0.0943 | 93.48 | 8600 | 0.4177 | 0.8772 | 0.8775 | | 0.0972 | 95.65 | 8800 | 0.4087 | 0.8791 | 0.8795 | | 0.0966 | 97.83 | 9000 | 0.4152 | 0.8763 | 0.8768 | | 0.0963 | 100.0 | 9200 | 0.4153 | 0.8717 | 0.8720 | | 0.0989 | 102.17 | 9400 | 0.4139 | 0.8756 | 0.8761 | | 0.0936 | 104.35 | 9600 | 0.4140 | 0.8738 | 0.8741 | | 0.0933 | 106.52 | 9800 | 0.4157 | 0.8771 | 0.8775 | | 0.097 | 108.7 | 10000 | 0.4160 | 0.8764 | 0.8768 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H4-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T03:47:02+00:00
null
null
# Kaoeiri/Keiana-L3-Test5.4-8B-10-Q6_K-GGUF This model was converted to GGUF format from [`Kaoeiri/Keiana-L3-Test5.4-8B-10`](https://huggingface.co/Kaoeiri/Keiana-L3-Test5.4-8B-10) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Kaoeiri/Keiana-L3-Test5.4-8B-10) for more details on the model. ## Use with llama.cpp Install llama.cpp through Homebrew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo Kaoeiri/Keiana-L3-Test5.4-8B-10-Q6_K-GGUF --model keiana-l3-test5.4-8b-10.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo Kaoeiri/Keiana-L3-Test5.4-8B-10-Q6_K-GGUF --model keiana-l3-test5.4-8b-10.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m keiana-l3-test5.4-8b-10.Q6_K.gguf -n 128 ```
{"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "Kaoeiri/Experimenting-Test4.5-8B-2", "cgato/L3-TheSpice-8b-v0.8.3", "llama-cpp", "gguf-my-repo"], "base_model": ["Kaoeiri/Keiana-L3-Test4.7-8B-3", "Kaoeiri/Experimenting-Test4.5-8B-2", "cgato/L3-TheSpice-8b-v0.8.3"]}
Kaoeiri/Keiana-L3-Test5.4-8B-10-Q6_K-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "Kaoeiri/Experimenting-Test4.5-8B-2", "cgato/L3-TheSpice-8b-v0.8.3", "llama-cpp", "gguf-my-repo", "base_model:Kaoeiri/Keiana-L3-Test4.7-8B-3", "base_model:Kaoeiri/Experimenting-Test4.5-8B-2", "base_model:cgato/L3-TheSpice-8b-v0.8.3", "region:us" ]
null
2024-04-27T03:47:44+00:00
null
transformers
# Uploaded model - **Developed by:** vutuka - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
vutuka/llama-3-8b-african-aya-f16
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:48:14+00:00
null
transformers
# gate369/llama-3-8b-silent-star-Q4_K_M-GGUF This model was converted to GGUF format from [`liminerity/llama-3-8b-silent-star`](https://huggingface.co/liminerity/llama-3-8b-silent-star) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/liminerity/llama-3-8b-silent-star) for more details on the model. ## Use with llama.cpp Install llama.cpp through Homebrew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo gate369/llama-3-8b-silent-star-Q4_K_M-GGUF --model llama-3-8b-silent-star.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo gate369/llama-3-8b-silent-star-Q4_K_M-GGUF --model llama-3-8b-silent-star.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-8b-silent-star.Q4_K_M.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "llama-cpp", "gguf-my-repo"], "base_model": "Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1"}
gate369/llama-3-8b-silent-star-Q4_K_M-GGUF
null
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "llama-cpp", "gguf-my-repo", "en", "base_model:Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:49:18+00:00
null
null
{"license": "openrail"}
DeckerIsland/Uchitel_Istorii
null
[ "license:openrail", "region:us" ]
null
2024-04-27T03:49:34+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GPT2_DocBot_SonatafyAI_V2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.1668 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.3848 | 1.0 | 3615 | 3.2728 | | 3.1553 | 2.0 | 7230 | 3.1955 | | 2.9906 | 3.0 | 10845 | 3.1657 | | 2.8988 | 4.0 | 14460 | 3.1610 | | 2.8482 | 5.0 | 18075 | 3.1668 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "GPT2_DocBot_SonatafyAI_V2", "results": []}]}
ajtamayoh/GPT2_DocBot_SonatafyAI_V2
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T03:51:02+00:00
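The GPT2_DocBot card above gives training losses but no usage. A hedged generation sketch with the standard text-generation pipeline; the prompt is a placeholder, not from the card:

```python
# Hedged sketch: sample from the fine-tuned GPT-2.
from transformers import pipeline

generator = pipeline("text-generation", model="ajtamayoh/GPT2_DocBot_SonatafyAI_V2")
result = generator("What causes migraines?", max_new_tokens=50, do_sample=True)
print(result[0]["generated_text"])
```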
text-generation
transformers
{}
WilliamStar/my_awesome_eli5_clm-model
null
[ "transformers", "pytorch", "tensorboard", "pegasus", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:51:15+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama2-20p-POE This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the HuggingFaceH4/ultrachat_200k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.39.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "llama2", "library_name": "peft", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrachat_200k"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "llama2-20p-POE", "results": []}]}
terry69/llama2-20p-POE
null
[ "peft", "tensorboard", "safetensors", "llama", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-04-27T03:52:39+00:00
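The llama2-20p-POE card above derives its total batch size from three reported factors; the arithmetic, spelled out as a check:

```python
# Effective global batch size, reproduced from the card's own hyperparameters.
per_device_train_batch_size = 4
num_devices = 4
gradient_accumulation_steps = 2
total = per_device_train_batch_size * num_devices * gradient_accumulation_steps
assert total == 32  # matches the card's reported total_train_batch_size
```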
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset. It achieves the following results on the evaluation set: - Loss: 0.2482 - F1 Score: 0.9091 - Accuracy: 0.9090 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3502 | 2.17 | 200 | 0.2827 | 0.8935 | 0.8932 | | 0.2706 | 4.35 | 400 | 0.2674 | 0.8952 | 0.8953 | | 0.2525 | 6.52 | 600 | 0.2616 | 0.9008 | 0.9008 | | 0.2382 | 8.7 | 800 | 0.2943 | 0.8818 | 0.8816 | | 0.2226 | 10.87 | 1000 | 0.2639 | 0.9043 | 0.9042 | | 0.2091 | 13.04 | 1200 | 0.2804 | 0.8949 | 0.8946 | | 0.1887 | 15.22 | 1400 | 0.3038 | 0.8875 | 0.8871 | | 0.1773 | 17.39 | 1600 | 0.2979 | 0.8888 | 0.8884 | | 0.165 | 19.57 | 1800 | 0.3023 | 0.8877 | 0.8877 | | 0.1502 | 21.74 | 2000 | 0.3303 | 0.8789 | 0.8789 | | 0.1388 | 23.91 | 2200 | 0.3254 | 0.8828 | 0.8830 | | 0.1285 | 26.09 | 2400 | 0.3685 | 0.8817 | 0.8816 | | 0.1145 | 28.26 | 2600 | 0.3917 | 0.8838 | 0.8843 | | 0.1043 | 30.43 | 2800 | 0.3995 | 0.8771 | 0.8768 | | 0.0963 | 32.61 | 3000 | 0.4367 | 0.8736 | 0.8741 | | 0.0858 | 34.78 | 3200 | 0.4512 | 0.8750 | 0.8754 | | 0.0828 | 36.96 | 3400 | 0.4695 | 0.8825 | 0.8830 | | 0.0753 | 39.13 | 3600 | 0.4656 | 0.8689 | 0.8693 | | 0.0661 | 41.3 | 3800 | 0.5001 | 0.8813 | 0.8816 | | 0.0574 | 43.48 | 4000 | 0.5272 | 0.8761 | 0.8761 | | 0.0581 | 45.65 | 4200 | 0.5399 | 0.8658 | 0.8665 | | 0.0536 | 47.83 | 4400 | 0.5618 | 0.8656 | 0.8658 | | 0.0504 | 50.0 | 4600 | 0.5276 | 0.8802 | 0.8802 | | 0.0476 | 52.17 | 4800 | 0.5307 | 0.8687 | 0.8686 | | 0.0425 | 54.35 | 5000 | 0.5681 | 0.8797 | 0.8795 | | 0.0391 | 56.52 | 5200 | 0.6236 | 0.8619 | 0.8617 | | 0.0373 | 58.7 | 5400 | 0.6070 | 0.8816 | 0.8816 | | 0.0332 | 60.87 | 5600 | 0.6179 | 0.8707 | 0.8706 | | 0.033 | 63.04 | 5800 | 0.6349 | 0.8721 | 0.8720 | | 0.0326 | 65.22 | 6000 | 0.6309 | 0.8721 | 0.8720 | | 0.0308 | 67.39 | 6200 | 0.6272 | 0.8814 | 0.8816 | | 0.0266 | 69.57 | 6400 | 0.6561 | 0.8706 | 0.8706 | | 0.0229 | 71.74 | 6600 | 0.6864 | 0.8776 | 0.8775 | | 0.0264 | 73.91 | 6800 | 0.6644 | 0.8728 | 0.8727 | | 0.0259 | 76.09 | 7000 | 0.6602 | 0.8836 | 0.8836 | | 0.0245 | 78.26 | 7200 | 0.6310 | 0.8801 | 0.8802 | | 0.0195 | 80.43 | 7400 | 0.7108 | 0.8769 | 0.8768 | | 0.0224 | 82.61 | 7600 | 0.6926 | 0.8801 | 0.8802 | | 0.0202 | 84.78 | 7800 | 0.7118 | 0.8794 | 0.8795 | | 0.0179 | 86.96 | 8000 | 0.7417 | 0.8742 | 0.8741 | | 0.0178 | 89.13 | 8200 | 0.7493 | 0.8802 | 0.8802 | | 0.02 | 91.3 | 8400 | 0.7425 | 0.8761 | 0.8761 | | 0.0146 | 93.48 | 8600 | 0.7639 | 0.8749 | 0.8747 | | 0.0164 | 95.65 | 8800 | 0.7490 | 0.8848 | 0.8850 | | 0.0156 | 97.83 | 9000 | 0.7522 | 0.8822 | 0.8823 | | 0.017 | 100.0 | 9200 | 0.7557 | 0.8768 | 0.8768 | | 0.0155 | 102.17 | 9400 | 0.7471 | 0.8795 | 0.8795 | | 0.0152 | 104.35 | 9600 | 0.7446 | 0.8788 | 0.8789 | | 0.0156 | 106.52 | 9800 | 0.7367 | 0.8795 | 0.8795 | | 0.0157 | 108.7 | 10000 | 0.7382 | 0.8788 | 0.8789 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H4-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T03:53:59+00:00
null
null
{}
KArtikKumsaradhi/trans-lingua
null
[ "region:us" ]
null
2024-04-27T03:54:29+00:00
text-classification
transformers
{}
Haaaaeun/bert-base-uncased-topK-cola
null
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:56:35+00:00
null
null
{}
skanumu5/textual_inversion_cat
null
[ "region:us" ]
null
2024-04-27T03:56:39+00:00
reinforcement-learning
ml-agents
# **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: i-pj/poca-SoccerTwos 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos"]}
i-pj/poca-SoccerTwos
null
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
null
2024-04-27T03:56:57+00:00
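To inspect the SoccerTwos policy locally (the row's tags list `onnx` and `tensorboard` artifacts), a hedged download sketch with `huggingface_hub`:

```python
# Hedged sketch: fetch the trained poca/SoccerTwos artifacts from the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="i-pj/poca-SoccerTwos")
print(local_dir)  # expected to contain the exported .onnx policy and TensorBoard logs
```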
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset. It achieves the following results on the evaluation set: - Loss: 0.3117 - F1 Score: 0.8757 - Accuracy: 0.8758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.4979 | 2.13 | 200 | 0.4474 | 0.7732 | 0.7762 | | 0.3785 | 4.26 | 400 | 0.3900 | 0.8322 | 0.8323 | | 0.3503 | 6.38 | 600 | 0.3767 | 0.8443 | 0.8444 | | 0.3243 | 8.51 | 800 | 0.3637 | 0.8477 | 0.8477 | | 0.3073 | 10.64 | 1000 | 0.3454 | 0.8537 | 0.8537 | | 0.292 | 12.77 | 1200 | 0.3486 | 0.8490 | 0.8490 | | 0.2856 | 14.89 | 1400 | 0.3275 | 0.8597 | 0.8597 | | 0.2806 | 17.02 | 1600 | 0.3302 | 0.8596 | 0.8597 | | 0.2738 | 19.15 | 1800 | 0.3483 | 0.8569 | 0.8570 | | 0.2685 | 21.28 | 2000 | 0.3293 | 0.8664 | 0.8664 | | 0.2693 | 23.4 | 2200 | 0.3196 | 0.8664 | 0.8664 | | 0.2562 | 25.53 | 2400 | 0.3518 | 0.8530 | 0.8530 | | 0.2603 | 27.66 | 2600 | 0.3153 | 0.8671 | 0.8671 | | 0.261 | 29.79 | 2800 | 0.3262 | 0.8644 | 0.8644 | | 0.2551 | 31.91 | 3000 | 0.3308 | 0.8631 | 0.8631 | | 0.2508 | 34.04 | 3200 | 0.3105 | 0.8677 | 0.8677 | | 0.2504 | 36.17 | 3400 | 0.3317 | 0.8644 | 0.8644 | | 0.2474 | 38.3 | 3600 | 0.3211 | 0.8684 | 0.8684 | | 0.2465 | 40.43 | 3800 | 0.3199 | 0.8697 | 0.8697 | | 0.2447 | 42.55 | 4000 | 0.3468 | 0.8577 | 0.8577 | | 0.242 | 44.68 | 4200 | 0.3231 | 0.8670 | 0.8671 | | 0.2395 | 46.81 | 4400 | 0.3210 | 0.8684 | 0.8684 | | 0.2409 | 48.94 | 4600 | 0.3285 | 0.8650 | 0.8651 | | 0.2362 | 51.06 | 4800 | 0.3240 | 0.8670 | 0.8671 | | 0.2354 | 53.19 | 5000 | 0.3370 | 0.8716 | 0.8717 | | 0.2391 | 55.32 | 5200 | 0.3197 | 0.8677 | 0.8677 | | 0.2323 | 57.45 | 5400 | 0.3376 | 0.8631 | 0.8631 | | 0.2301 | 59.57 | 5600 | 0.3173 | 0.8684 | 0.8684 | | 0.2336 | 61.7 | 5800 | 0.3153 | 0.8671 | 0.8671 | | 0.2276 | 63.83 | 6000 | 0.3420 | 0.8663 | 0.8664 | | 0.2287 | 65.96 | 6200 | 0.3250 | 0.8731 | 0.8731 | | 0.2259 | 68.09 | 6400 | 0.3270 | 0.8731 | 0.8731 | | 0.2264 | 70.21 | 6600 | 0.3400 | 0.8657 | 0.8657 | | 0.2263 | 72.34 | 6800 | 0.3203 | 0.8718 | 0.8717 | | 0.223 | 74.47 | 7000 | 0.3480 | 0.8682 | 0.8684 | | 0.2205 | 76.6 | 7200 | 0.3297 | 0.8711 | 0.8711 | | 0.226 | 78.72 | 7400 | 0.3261 | 0.8711 | 0.8711 | | 0.222 | 80.85 | 7600 | 0.3342 | 0.8664 | 0.8664 | | 0.2208 | 82.98 | 7800 | 0.3288 | 0.8711 | 0.8711 | | 0.2211 | 85.11 | 8000 | 0.3224 | 0.8718 | 0.8717 | | 0.2179 | 87.23 | 8200 | 0.3271 | 0.8711 | 0.8711 | | 0.2192 | 89.36 | 8400 | 0.3299 | 0.8711 | 0.8711 | | 0.2202 | 91.49 | 8600 | 0.3340 | 0.8691 | 0.8691 | | 0.2151 | 93.62 | 8800 | 0.3307 | 0.8717 | 0.8717 | | 0.2198 | 95.74 | 9000 | 0.3376 | 0.8664 | 0.8664 | | 0.2138 | 97.87 | 9200 | 0.3277 | 0.8738 | 0.8737 | | 0.2163 | 100.0 | 9400 | 0.3294 | 0.8704 | 0.8704 | | 0.2148 | 102.13 | 9600 | 0.3324 | 0.8704 | 0.8704 | | 0.2144 | 104.26 | 9800 | 0.3316 | 0.8704 | 0.8704 | | 0.2169 | 106.38 | 10000 | 0.3303 | 0.8711 | 0.8711 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T03:57:07+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # leagaleasy-phi-3-adapter This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "microsoft/Phi-3-mini-4k-instruct", "model-index": [{"name": "leagaleasy-phi-3-adapter", "results": []}]}
Nithin29/leagaleasy-phi-3-adapter
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2024-04-27T03:59:59+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset. It achieves the following results on the evaluation set: - Loss: 0.3073 - F1 Score: 0.8784 - Accuracy: 0.8784 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.4599 | 2.13 | 200 | 0.4099 | 0.8034 | 0.8049 | | 0.326 | 4.26 | 400 | 0.3549 | 0.8505 | 0.8510 | | 0.2913 | 6.38 | 600 | 0.3386 | 0.8624 | 0.8624 | | 0.2751 | 8.51 | 800 | 0.3119 | 0.8744 | 0.8744 | | 0.2618 | 10.64 | 1000 | 0.3183 | 0.8691 | 0.8691 | | 0.2539 | 12.77 | 1200 | 0.3306 | 0.8631 | 0.8631 | | 0.2466 | 14.89 | 1400 | 0.3340 | 0.8697 | 0.8697 | | 0.2394 | 17.02 | 1600 | 0.3239 | 0.8730 | 0.8731 | | 0.2341 | 19.15 | 1800 | 0.3410 | 0.8589 | 0.8591 | | 0.2248 | 21.28 | 2000 | 0.3448 | 0.8684 | 0.8684 | | 0.2254 | 23.4 | 2200 | 0.3245 | 0.8798 | 0.8798 | | 0.2104 | 25.53 | 2400 | 0.3476 | 0.8691 | 0.8691 | | 0.2125 | 27.66 | 2600 | 0.3308 | 0.8724 | 0.8724 | | 0.2054 | 29.79 | 2800 | 0.3384 | 0.8771 | 0.8771 | | 0.1984 | 31.91 | 3000 | 0.3369 | 0.8684 | 0.8684 | | 0.1927 | 34.04 | 3200 | 0.3278 | 0.8811 | 0.8811 | | 0.1894 | 36.17 | 3400 | 0.3380 | 0.8778 | 0.8778 | | 0.1846 | 38.3 | 3600 | 0.3533 | 0.8724 | 0.8724 | | 0.1814 | 40.43 | 3800 | 0.3780 | 0.8669 | 0.8671 | | 0.1788 | 42.55 | 4000 | 0.3799 | 0.8670 | 0.8671 | | 0.171 | 44.68 | 4200 | 0.3806 | 0.8670 | 0.8671 | | 0.1684 | 46.81 | 4400 | 0.3548 | 0.8771 | 0.8771 | | 0.1676 | 48.94 | 4600 | 0.3834 | 0.8723 | 0.8724 | | 0.1627 | 51.06 | 4800 | 0.3567 | 0.8784 | 0.8784 | | 0.1578 | 53.19 | 5000 | 0.3909 | 0.8717 | 0.8717 | | 0.1618 | 55.32 | 5200 | 0.3847 | 0.8717 | 0.8717 | | 0.1505 | 57.45 | 5400 | 0.4032 | 0.8717 | 0.8717 | | 0.1472 | 59.57 | 5600 | 0.3874 | 0.8758 | 0.8758 | | 0.1467 | 61.7 | 5800 | 0.3742 | 0.8764 | 0.8764 | | 0.1387 | 63.83 | 6000 | 0.4088 | 0.8811 | 0.8811 | | 0.1413 | 65.96 | 6200 | 0.4302 | 0.8623 | 0.8624 | | 0.1385 | 68.09 | 6400 | 0.4217 | 0.8677 | 0.8677 | | 0.1348 | 70.21 | 6600 | 0.4275 | 0.8710 | 0.8711 | | 0.1335 | 72.34 | 6800 | 0.3906 | 0.8771 | 0.8771 | | 0.1308 | 74.47 | 7000 | 0.4620 | 0.8594 | 0.8597 | | 0.127 | 76.6 | 7200 | 0.4327 | 0.8790 | 0.8791 | | 0.1308 | 78.72 | 7400 | 0.4144 | 0.8791 | 0.8791 | | 0.1241 | 80.85 | 7600 | 0.4395 | 0.8704 | 0.8704 | | 0.1224 | 82.98 | 7800 | 0.4443 | 0.8717 | 0.8717 | | 0.1235 | 85.11 | 8000 | 0.4423 | 0.8656 | 0.8657 | | 0.1213 | 87.23 | 8200 | 0.4459 | 0.8690 | 0.8691 | | 0.1202 | 89.36 | 8400 | 0.4360 | 0.8771 | 0.8771 | | 0.1186 | 91.49 | 8600 | 0.4519 | 0.8730 | 0.8731 | | 0.1159 | 93.62 | 8800 | 0.4460 | 0.8724 | 0.8724 | | 0.1173 | 95.74 | 9000 | 0.4570 | 0.8656 | 0.8657 | | 0.1129 | 97.87 | 9200 | 0.4473 | 0.8764 | 0.8764 | | 0.1127 | 100.0 | 9400 | 0.4517 | 0.8737 | 0.8737 | | 0.1139 | 102.13 | 9600 | 0.4541 | 0.8724 | 0.8724 | | 0.1124 | 104.26 | 9800 | 0.4552 | 0.8710 | 0.8711 | | 0.1091 | 106.38 | 10000 | 0.4533 | 0.8744 | 0.8744 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T04:00:10+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5070 - F1 Score: 0.8764 - Accuracy: 0.8764 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.4322 | 2.13 | 200 | 0.3501 | 0.8523 | 0.8524 | | 0.292 | 4.26 | 400 | 0.3385 | 0.8660 | 0.8664 | | 0.2698 | 6.38 | 600 | 0.3431 | 0.8617 | 0.8617 | | 0.2532 | 8.51 | 800 | 0.3031 | 0.8757 | 0.8758 | | 0.2347 | 10.64 | 1000 | 0.3406 | 0.8683 | 0.8684 | | 0.2237 | 12.77 | 1200 | 0.3251 | 0.8717 | 0.8717 | | 0.2101 | 14.89 | 1400 | 0.3374 | 0.8744 | 0.8744 | | 0.2001 | 17.02 | 1600 | 0.3391 | 0.8775 | 0.8778 | | 0.187 | 19.15 | 1800 | 0.3406 | 0.8711 | 0.8711 | | 0.1703 | 21.28 | 2000 | 0.3401 | 0.8811 | 0.8811 | | 0.1702 | 23.4 | 2200 | 0.3899 | 0.8690 | 0.8691 | | 0.1493 | 25.53 | 2400 | 0.3893 | 0.8744 | 0.8744 | | 0.145 | 27.66 | 2600 | 0.3886 | 0.8750 | 0.8751 | | 0.1306 | 29.79 | 2800 | 0.4189 | 0.8682 | 0.8684 | | 0.1211 | 31.91 | 3000 | 0.4361 | 0.8601 | 0.8604 | | 0.1078 | 34.04 | 3200 | 0.4087 | 0.8831 | 0.8831 | | 0.1011 | 36.17 | 3400 | 0.4195 | 0.8824 | 0.8824 | | 0.0951 | 38.3 | 3600 | 0.4384 | 0.8751 | 0.8751 | | 0.088 | 40.43 | 3800 | 0.4612 | 0.8723 | 0.8724 | | 0.0821 | 42.55 | 4000 | 0.5273 | 0.8697 | 0.8697 | | 0.0781 | 44.68 | 4200 | 0.5045 | 0.8777 | 0.8778 | | 0.0717 | 46.81 | 4400 | 0.4913 | 0.8778 | 0.8778 | | 0.0684 | 48.94 | 4600 | 0.5181 | 0.8764 | 0.8764 | | 0.0634 | 51.06 | 4800 | 0.4860 | 0.8784 | 0.8784 | | 0.0567 | 53.19 | 5000 | 0.5377 | 0.8744 | 0.8744 | | 0.0559 | 55.32 | 5200 | 0.5495 | 0.8811 | 0.8811 | | 0.0509 | 57.45 | 5400 | 0.5644 | 0.8784 | 0.8784 | | 0.0512 | 59.57 | 5600 | 0.5268 | 0.8824 | 0.8824 | | 0.0477 | 61.7 | 5800 | 0.5323 | 0.8891 | 0.8891 | | 0.0463 | 63.83 | 6000 | 0.5887 | 0.8744 | 0.8744 | | 0.0472 | 65.96 | 6200 | 0.5930 | 0.8771 | 0.8771 | | 0.0443 | 68.09 | 6400 | 0.5965 | 0.8703 | 0.8704 | | 0.0365 | 70.21 | 6600 | 0.6416 | 0.8710 | 0.8711 | | 0.0402 | 72.34 | 6800 | 0.5807 | 0.8838 | 0.8838 | | 0.0366 | 74.47 | 7000 | 0.6664 | 0.8689 | 0.8691 | | 0.0352 | 76.6 | 7200 | 0.6275 | 0.8791 | 0.8791 | | 0.0343 | 78.72 | 7400 | 0.6229 | 0.8831 | 0.8831 | | 0.0328 | 80.85 | 7600 | 0.6929 | 0.8710 | 0.8711 | | 0.0281 | 82.98 | 7800 | 0.6863 | 0.8770 | 0.8771 | | 0.0314 | 85.11 | 8000 | 0.6379 | 0.8764 | 0.8764 | | 0.0295 | 87.23 | 8200 | 0.6744 | 0.8757 | 0.8758 | | 0.0268 | 89.36 | 8400 | 0.6775 | 0.8804 | 0.8804 | | 0.0275 | 91.49 | 8600 | 0.6819 | 0.8804 | 0.8804 | | 0.0251 | 93.62 | 8800 | 0.6765 | 0.8791 | 0.8791 | | 0.0243 | 95.74 | 9000 | 0.7077 | 0.8804 | 0.8804 | | 0.0255 | 97.87 | 9200 | 0.6910 | 0.8797 | 0.8798 | | 0.0234 | 100.0 | 9400 | 0.6982 | 0.8811 | 0.8811 | | 0.023 | 102.13 | 9600 | 0.7052 | 0.8750 | 0.8751 | | 0.0233 | 104.26 | 9800 | 0.6939 | 0.8817 | 0.8818 | | 0.0229 | 106.38 | 10000 | 0.6918 | 0.8817 | 0.8818 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T04:00:10+00:00
null
null
{}
WALIDALI/bekiksritlySDXL20rum_repeat
null
[ "region:us" ]
null
2024-04-27T04:03:46+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Mohamedshaaban2001/llama3_2
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T04:04:26+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-llama-adapterhappy2sad-study-50-0.003
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T04:05:14+00:00
null
null
{"license": "openrail"}
frankmurray/impression
null
[ "license:openrail", "region:us" ]
null
2024-04-27T04:06:42+00:00
null
null
# EnverLee/phi2-ko-instruction-tune-Q2_K-GGUF This model was converted to GGUF format from [`inoutro/phi2-ko-instruction-tune`](https://huggingface.co/inoutro/phi2-ko-instruction-tune) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/inoutro/phi2-ko-instruction-tune) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo EnverLee/phi2-ko-instruction-tune-Q2_K-GGUF --model phi2-ko-instruction-tune.Q2_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo EnverLee/phi2-ko-instruction-tune-Q2_K-GGUF --model phi2-ko-instruction-tune.Q2_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi2-ko-instruction-tune.Q2_K.gguf -n 128 ```
{"language": ["ko"], "license": "cc-by-3.0", "tags": ["llama-cpp", "gguf-my-repo"]}
EnverLee/phi2-ko-instruction-tune-Q2_K-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "ko", "license:cc-by-3.0", "region:us" ]
null
2024-04-27T04:06:43+00:00
null
null
{}
liho00/dt-25
null
[ "region:us" ]
null
2024-04-27T04:07:50+00:00
null
null
# M7Meliodaspercival_01_experiment26t3q-7B M7Meliodaspercival_01_experiment26t3q-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 - model: liminerity/M7-7b - model: MaziyarPanahi/MeliodasPercival_01_Experiment26T3q merge_method: model_stock base_model: mistralai/Mistral-7B-v0.1 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/M7Meliodaspercival_01_experiment26t3q-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]}
automerger/M7Meliodaspercival_01_experiment26t3q-7B
null
[ "merge", "mergekit", "lazymergekit", "automerger", "license:apache-2.0", "region:us" ]
null
2024-04-27T04:08:37+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Mohamedshaaban2001/llama3_3
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T04:11:12+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tarunabraham1986/code-search-net-tokenizer
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T04:11:14+00:00
null
diffusers
<p align="center"> <img src="https://github.com/JackAILab/ConsistentID/assets/135965025/c0594480-d73d-4268-95ca-5494ca2a61e4" height=20> </p> <!-- ## <div align="center"><b>ConsistentID</b></div> --> <div align="center">

## ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving

[![Paper page](https://huggingface.co/datasets/huggingface/badges/resolve/main/paper-page-md-dark.svg)]() [📄[Paper](https://arxiv.org/abs/2404.16771)] &emsp; [🚩[Project Page](https://ssugarwh.github.io/consistentid.github.io/)] &emsp; [🖼[Gradio Demo](http://consistentid.natapp1.cc/)] <br>

</div>

### 🌠 **Key Features:**

1. Portrait generation with extremely high **ID fidelity**, without sacrificing diversity or text controllability.
2. Introduces **FaceParsing** and **FaceID** information into the diffusion model.
3. Rapid customization **within seconds**, with no additional LoRA training.
4. Can serve as an **Adapter** to collaborate with other base models and LoRA modules in the community.

---

## 🔥 **Examples**

<p align="center"> <img src="https://github.com/JackAILab/ConsistentID/assets/135965025/f949a03d-bed2-4839-a995-7b451d8c981b" height=450> </p>

## 🚩 To-Do List

Your star will help facilitate the process.

- [x] Release training and evaluation code, and the demo!
- [ ] Retrain with more data and the SDXL base model to enhance aesthetics and generalization.
- [ ] Release a multi-ID input version to guide the improvement of ID diversity.
- [ ] Optimize training and inference structures to further improve text following and ID decoupling capabilities.

## 🏷️ Abstract

This work in the field of AIGC introduces FaceParsing and FaceID information into the diffusion model. Previous work mainly focused on overall ID preservation; even in recently proposed fine-grained ID preservation models such as InstantID, the injection of facial ID features is fixed. To maintain fine-grained ID consistency for facial features more flexibly, a multimodal fine-grained ID dataset of 50,000 samples was reconstructed to train the proposed FacialEncoder model, which supports common functions such as personalized photos, gender/age changes, and identity confusion. We also define FGIS, a unified benchmark for fine-grained identity preservation that covers several common personalized-portrait scenes and characters, and construct a fine-grained ID preservation baseline. Finally, extensive experiments show that ConsistentID achieves state-of-the-art results on facial personalization tasks. They verify that ConsistentID improves ID consistency and can even modify facial features through finer-grained prompts, which opens up a direction for future research on fine-grained facial personalization.

## 🔧 Requirements

To install requirements:

```setup
pip3 install -r requirements.txt
```

## 📦️ Data Preparation

Prepare the data in the following format:

```
├── data
│   ├── JSON_all.json
│   ├── resize_IMG          # Images
│   ├── all_faceID          # FaceID
│   └── parsing_mask_IMG    # Parsing Mask
```

The .json file should look like:

```
[
  {
    "resize_IMG": "Path to resized image...",
    "parsing_color_IMG": "...",
    "parsing_mask_IMG": "...",
    "vqa_llva": "...",
    "id_embed_file_resize": "...",
    "vqa_llva_more_face_detail": "..."
  },
  ...
]
```

## 🚀 Train

Ensure that the workspace is the root directory of the project.

```setup
bash train_bash.sh
```

## 🧪 Infer

Ensure that the workspace is the root directory of the project.

```setup
python infer.py
```

## ⏬ Model weights

We are hosting the model weights on **Hugging Face** for a faster and more stable demo experience, so stay tuned. The pre-trained model parameters can now be downloaded from [Google Drive](https://drive.google.com/file/d/1jCHICryESmNkzGi8J_FlY3PjJz9gqoSI/view?usp=drive_link) or [Baidu Netdisk](https://pan.baidu.com/s/1NAVmH8S7Ls5rZc-snDk1Ng?pwd=nsh6).

## Acknowledgement

* Inspired by many excellent demos and repos, including [IPAdapter](https://github.com/tencent-ailab/IP-Adapter), [FastComposer](https://github.com/mit-han-lab/fastcomposer), and [PhotoMaker](https://github.com/TencentARC/PhotoMaker). Thanks for their great work!
* Thanks to the open-source contributions of the following projects: [face-parsing.PyTorch](https://github.com/zllrunning/face-parsing.PyTorch), [LLaVA](https://github.com/haotian-liu/LLaVA), [insightface](https://github.com/deepinsight/insightface), [FFHQ](https://github.com/NVlabs/ffhq-dataset), [CelebA](https://github.com/switchablenorms/CelebAMask-HQ), [SFHQ](https://github.com/SelfishGene/SFHQ-dataset).
* Thanks to the [HuggingFace](https://github.com/huggingface) gradio team for their free GPU support!

## Disclaimer

This project strives to impact the domain of AI-driven image generation positively. Users are granted the freedom to create images using this tool, but they are expected to comply with local laws and use it responsibly. The developers do not assume any responsibility for potential misuse by users.

## Citation

If you found this code helpful, please consider citing:

~~~
@article{huang2024consistentid,
  title={ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving},
  author={Huang, Jiehui and Dong, Xiao and Song, Wenhui and Li, Hanhui and Zhou, Jun and Cheng, Yuhao and Liao, Shutao and Chen, Long and Yan, Yiqiang and Liao, Shengcai and others},
  journal={arXiv preprint arXiv:2404.16771},
  year={2024}
}
~~~
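The data layout above expects precomputed FaceID embeddings. As a rough sketch (not the project's actual preprocessing script), such embeddings are typically extracted with insightface, which the acknowledgements reference; the model pack name, detector size, and file path below are illustrative assumptions.

```python
import cv2
from insightface.app import FaceAnalysis

# Illustrative settings; ConsistentID's exact preprocessing may differ.
app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))

img = cv2.imread("data/resize_IMG/example.jpg")  # hypothetical path following the layout above
faces = app.get(img)
face_embedding = faces[0].normed_embedding  # 512-dim identity vector
```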
{"language": ["ak"], "license": "mit", "library_name": "diffusers"}
JackAILab/ConsistentID
null
[ "diffusers", "ak", "arxiv:2404.16771", "license:mit", "region:us", "has_space" ]
null
2024-04-27T04:16:59+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-140 This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
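A minimal inference sketch for the card above, assuming the classification head and label mapping were pushed with the checkpoint (the repo id comes from this record):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
classifier = pipeline("text-classification", model="huiang/distilbert-140")

# With no dataset card, the labels may be generic LABEL_0/LABEL_1-style names.
print(classifier("This is a sample sentence."))
```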
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "distilbert-140", "results": []}]}
huiang/distilbert-140
null
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T04:17:41+00:00
null
null
# chenduo/Llama-3-Unholy-8B-e4-Q6_K-GGUF This model was converted to GGUF format from [`Undi95/Llama-3-Unholy-8B-e4`](https://huggingface.co/Undi95/Llama-3-Unholy-8B-e4) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Undi95/Llama-3-Unholy-8B-e4) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo chenduo/Llama-3-Unholy-8B-e4-Q6_K-GGUF --model llama-3-unholy-8b-e4.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo chenduo/Llama-3-Unholy-8B-e4-Q6_K-GGUF --model llama-3-unholy-8b-e4.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-unholy-8b-e4.Q6_K.gguf -n 128 ```
{"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "nsfw", "llama-cpp", "gguf-my-repo"]}
chenduo/Llama-3-Unholy-8B-e4-Q6_K-GGUF
null
[ "gguf", "not-for-all-audiences", "nsfw", "llama-cpp", "gguf-my-repo", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-27T04:17:45+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4ac-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.5487 - F1 Score: 0.7314 - Accuracy: 0.7311 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6424 | 0.93 | 200 | 0.5878 | 0.6972 | 0.6971 | | 0.5937 | 1.87 | 400 | 0.5856 | 0.7043 | 0.7065 | | 0.5713 | 2.8 | 600 | 0.5549 | 0.7279 | 0.7276 | | 0.5626 | 3.74 | 800 | 0.5570 | 0.7267 | 0.7267 | | 0.555 | 4.67 | 1000 | 0.5495 | 0.7334 | 0.7331 | | 0.5452 | 5.61 | 1200 | 0.5556 | 0.7255 | 0.7258 | | 0.5456 | 6.54 | 1400 | 0.5529 | 0.7267 | 0.7270 | | 0.5351 | 7.48 | 1600 | 0.5454 | 0.7384 | 0.7381 | | 0.5455 | 8.41 | 1800 | 0.5389 | 0.7405 | 0.7402 | | 0.5363 | 9.35 | 2000 | 0.5550 | 0.7326 | 0.7331 | | 0.5308 | 10.28 | 2200 | 0.5420 | 0.7408 | 0.7405 | | 0.5319 | 11.21 | 2400 | 0.5461 | 0.7348 | 0.7349 | | 0.5286 | 12.15 | 2600 | 0.5469 | 0.7356 | 0.7358 | | 0.5256 | 13.08 | 2800 | 0.5435 | 0.7420 | 0.7419 | | 0.5265 | 14.02 | 3000 | 0.5393 | 0.7364 | 0.7361 | | 0.5246 | 14.95 | 3200 | 0.5433 | 0.7377 | 0.7378 | | 0.5214 | 15.89 | 3400 | 0.5467 | 0.7387 | 0.7390 | | 0.5192 | 16.82 | 3600 | 0.5376 | 0.7384 | 0.7381 | | 0.5221 | 17.76 | 3800 | 0.5390 | 0.7429 | 0.7428 | | 0.5194 | 18.69 | 4000 | 0.5362 | 0.7425 | 0.7422 | | 0.5146 | 19.63 | 4200 | 0.5428 | 0.7435 | 0.7437 | | 0.5169 | 20.56 | 4400 | 0.5344 | 0.7478 | 0.7475 | | 0.5137 | 21.5 | 4600 | 0.5554 | 0.7331 | 0.7340 | | 0.5135 | 22.43 | 4800 | 0.5325 | 0.7403 | 0.7402 | | 0.512 | 23.36 | 5000 | 0.5467 | 0.7451 | 0.7455 | | 0.5143 | 24.3 | 5200 | 0.5323 | 0.7452 | 0.7449 | | 0.5114 | 25.23 | 5400 | 0.5372 | 0.7443 | 0.7440 | | 0.5119 | 26.17 | 5600 | 0.5342 | 0.7431 | 0.7428 | | 0.5076 | 27.1 | 5800 | 0.5323 | 0.7481 | 0.7478 | | 0.5033 | 28.04 | 6000 | 0.5375 | 0.7481 | 0.7478 | | 0.5092 | 28.97 | 6200 | 0.5409 | 0.7431 | 0.7431 | | 0.5087 | 29.91 | 6400 | 0.5336 | 0.7446 | 0.7443 | | 0.5068 | 30.84 | 6600 | 0.5447 | 0.7414 | 0.7416 | | 0.5039 | 31.78 | 6800 | 0.5335 | 0.7463 | 0.7460 | | 0.5055 | 32.71 | 7000 | 0.5344 | 0.7475 | 0.7472 | | 0.5019 | 33.64 | 7200 | 0.5390 | 0.7437 | 0.7437 | | 0.5028 | 34.58 | 7400 | 0.5360 | 0.7457 | 0.7455 | | 0.5044 | 35.51 | 7600 | 0.5333 | 0.7454 | 0.7452 | | 0.4999 | 36.45 | 7800 | 0.5364 | 0.7469 | 0.7466 | | 0.5038 | 37.38 | 8000 | 0.5428 | 0.7413 | 0.7413 | | 0.5013 | 38.32 | 8200 | 0.5369 | 0.7454 | 0.7452 | | 0.4995 | 39.25 | 8400 | 0.5346 | 0.7478 | 0.7475 | | 0.5054 | 40.19 | 8600 | 0.5328 | 0.7440 | 0.7437 | | 0.5004 | 41.12 | 8800 | 0.5360 | 0.7460 | 0.7457 | | 0.5004 | 42.06 | 
9000 | 0.5351 | 0.7478 | 0.7475 | | 0.4999 | 42.99 | 9200 | 0.5401 | 0.7447 | 0.7446 | | 0.4998 | 43.93 | 9400 | 0.5380 | 0.7471 | 0.7469 | | 0.4988 | 44.86 | 9600 | 0.5360 | 0.7490 | 0.7487 | | 0.5002 | 45.79 | 9800 | 0.5367 | 0.7463 | 0.7460 | | 0.5007 | 46.73 | 10000 | 0.5374 | 0.7471 | 0.7469 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
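For completeness, a minimal sketch of loading this adapter for inference; the sibling L8/L32 adapters and the other seqsight runs in this dump load the same way. `trust_remote_code` and `num_labels=2` are assumptions: genomic backbones often ship custom modeling code, and H4ac detection is a binary task.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_8192_512_30M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # assumptions, see note above
)

# Attach the adapter weights from this run on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
```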
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T04:18:52+00:00
null
null
# Kaoeiri/Keiana-L3-Test6.2-8B-18-Q6_K-GGUF This model was converted to GGUF format from [`Kaoeiri/Keiana-L3-Test6.2-8B-18`](https://huggingface.co/Kaoeiri/Keiana-L3-Test6.2-8B-18) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Kaoeiri/Keiana-L3-Test6.2-8B-18) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo Kaoeiri/Keiana-L3-Test6.2-8B-18-Q6_K-GGUF --model keiana-l3-test6.2-8b-18.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo Kaoeiri/Keiana-L3-Test6.2-8B-18-Q6_K-GGUF --model keiana-l3-test6.2-8b-18.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m keiana-l3-test6.2-8b-18.Q6_K.gguf -n 128 ```
{"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test5.4-8B-10", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "Kaoeiri/Keiana-L3-Test6-8B-16", "llama-cpp", "gguf-my-repo"], "base_model": ["Kaoeiri/Keiana-L3-Test5.4-8B-10", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "Kaoeiri/Keiana-L3-Test6-8B-16"]}
Kaoeiri/Keiana-L3-Test6.2-8B-18-Q6_K-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test5.4-8B-10", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "Kaoeiri/Keiana-L3-Test6-8B-16", "llama-cpp", "gguf-my-repo", "base_model:Kaoeiri/Keiana-L3-Test5.4-8B-10", "base_model:Kaoeiri/Keiana-L3-Test4.7-8B-3", "base_model:Kaoeiri/Keiana-L3-Test6-8B-16", "region:us" ]
null
2024-04-27T04:19:01+00:00
null
null
# Phi-3-mini-128k-instruct

![Image](Phi-3.jpg)

## Requirements

To use this model, you need llama.cpp installed on your machine. You can get llama.cpp from the following repository:

- [llama.cpp repository](https://github.com/ggerganov/llama.cpp)

To install llama.cpp, follow these steps:

```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```

## Using the model

The model's prompt template is as follows:

```plaintext
<|user|>\n{prompt} <|end|>\n<|assistant|>
```

You can run the model in llama.cpp with the following command:

```bash
./main -m ggml-model-Q8_0.gguf -p "<|user|>\nWhat is your name? <|end|>\n<|assistant|>" --log-disable
```

LM Studio config-presets

Filename: phi-3.preset.json

```json
{
  "name": "Phi-3",
  "inference_params": {
    "input_prefix": "<|user|>\n",
    "input_suffix": "<|end|>\n<|assistant|>",
    "antiprompt": [
      "<|user|>\n",
      "<|end|>\n<|assistant|>"
    ],
    "pre_prompt": "<|system|>\nYou are a helpful AI assistant.<|end|>",
    "pre_prompt_prefix": "",
    "pre_prompt_suffix": ""
  },
  "load_params": {
    "rope_freq_scale": 0,
    "rope_freq_base": 0
  }
}
```

## References

- [Original repository](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
- [llama.cpp repository](https://github.com/ggerganov/llama.cpp)
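For scripted use, llama-cpp-python exposes the same runtime as the CLI above; this is a minimal sketch assuming the `ggml-model-Q8_0.gguf` file from the command above is present locally.

```python
from llama_cpp import Llama

# Load the quantized model; n_ctx is an illustrative context size.
llm = Llama(model_path="ggml-model-Q8_0.gguf", n_ctx=4096)

# Reuse the prompt template shown above.
output = llm(
    "<|user|>\nWhat is your name? <|end|>\n<|assistant|>",
    max_tokens=128,
    stop=["<|end|>"],
)
print(output["choices"][0]["text"])
```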
{"language": ["es", "en"], "tags": ["gguf", "llama.cpp", "phi-3", "phi-3-mini", "128k", "phi-3-mini-128k"]}
HirCoir/Phi-3-mini-4k-instruct-gguf
null
[ "gguf", "llama.cpp", "phi-3", "phi-3-mini", "128k", "phi-3-mini-128k", "es", "en", "region:us" ]
null
2024-04-27T04:19:14+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus_books_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.1951 - Bleu: 0.2003 - Gen Len: 18.1916 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | 3.6492 | 1.0 | 1617 | 3.2786 | 0.1589 | 18.21 | | 3.5126 | 2.0 | 3234 | 3.1951 | 0.2003 | 18.1916 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
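A short inference sketch for this checkpoint; the `translate English to French:` task prefix is an assumption based on the usual opus_books fine-tuning recipe, not confirmed by the card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "WillXH/my_awesome_opus_books_model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# The task prefix below is assumed, not confirmed by the card.
text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```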
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "t5-small", "model-index": [{"name": "my_awesome_opus_books_model", "results": []}]}
WillXH/my_awesome_opus_books_model
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T04:21:39+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4ac-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.5588 - F1 Score: 0.7340 - Accuracy: 0.7337 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6202 | 0.93 | 200 | 0.5723 | 0.7236 | 0.7235 | | 0.5647 | 1.87 | 400 | 0.5588 | 0.7257 | 0.7261 | | 0.5465 | 2.8 | 600 | 0.5437 | 0.7375 | 0.7372 | | 0.538 | 3.74 | 800 | 0.5386 | 0.7460 | 0.7457 | | 0.5327 | 4.67 | 1000 | 0.5372 | 0.7410 | 0.7408 | | 0.5206 | 5.61 | 1200 | 0.5462 | 0.7338 | 0.7343 | | 0.5204 | 6.54 | 1400 | 0.5584 | 0.7314 | 0.7328 | | 0.5069 | 7.48 | 1600 | 0.5359 | 0.7451 | 0.7449 | | 0.5151 | 8.41 | 1800 | 0.5314 | 0.7425 | 0.7422 | | 0.5056 | 9.35 | 2000 | 0.5400 | 0.7448 | 0.7446 | | 0.5006 | 10.28 | 2200 | 0.5304 | 0.7460 | 0.7463 | | 0.5004 | 11.21 | 2400 | 0.5401 | 0.7406 | 0.7405 | | 0.4948 | 12.15 | 2600 | 0.5606 | 0.7377 | 0.7387 | | 0.491 | 13.08 | 2800 | 0.5412 | 0.7367 | 0.7364 | | 0.4902 | 14.02 | 3000 | 0.5359 | 0.7466 | 0.7463 | | 0.4866 | 14.95 | 3200 | 0.5357 | 0.7442 | 0.7440 | | 0.4826 | 15.89 | 3400 | 0.5392 | 0.7481 | 0.7478 | | 0.4796 | 16.82 | 3600 | 0.5472 | 0.7441 | 0.7440 | | 0.4801 | 17.76 | 3800 | 0.5762 | 0.7279 | 0.7302 | | 0.4779 | 18.69 | 4000 | 0.5459 | 0.7463 | 0.7460 | | 0.4724 | 19.63 | 4200 | 0.5413 | 0.7453 | 0.7452 | | 0.4716 | 20.56 | 4400 | 0.5350 | 0.7493 | 0.7490 | | 0.4689 | 21.5 | 4600 | 0.5510 | 0.7428 | 0.7431 | | 0.4643 | 22.43 | 4800 | 0.5387 | 0.7445 | 0.7446 | | 0.4655 | 23.36 | 5000 | 0.5401 | 0.7493 | 0.7490 | | 0.4668 | 24.3 | 5200 | 0.5416 | 0.7490 | 0.7487 | | 0.4607 | 25.23 | 5400 | 0.5412 | 0.7460 | 0.7457 | | 0.4608 | 26.17 | 5600 | 0.5418 | 0.7459 | 0.7457 | | 0.4556 | 27.1 | 5800 | 0.5428 | 0.7419 | 0.7416 | | 0.4486 | 28.04 | 6000 | 0.5541 | 0.7498 | 0.7496 | | 0.4544 | 28.97 | 6200 | 0.5575 | 0.7483 | 0.7481 | | 0.4553 | 29.91 | 6400 | 0.5399 | 0.7469 | 0.7466 | | 0.4504 | 30.84 | 6600 | 0.5560 | 0.7513 | 0.7510 | | 0.4475 | 31.78 | 6800 | 0.5508 | 0.7504 | 0.7501 | | 0.4495 | 32.71 | 7000 | 0.5533 | 0.7490 | 0.7487 | | 0.4451 | 33.64 | 7200 | 0.5597 | 0.7455 | 0.7455 | | 0.4438 | 34.58 | 7400 | 0.5496 | 0.7498 | 0.7496 | | 0.4421 | 35.51 | 7600 | 0.5490 | 0.7478 | 0.7475 | | 0.438 | 36.45 | 7800 | 0.5653 | 0.7490 | 0.7487 | | 0.4441 | 37.38 | 8000 | 0.5585 | 0.7489 | 0.7487 | | 0.4371 | 38.32 | 8200 | 0.5524 | 0.7469 | 0.7466 | | 0.4376 | 39.25 | 8400 | 0.5513 | 0.7492 | 0.7490 | | 0.4436 | 40.19 | 8600 | 0.5530 | 0.7493 | 0.7490 | | 0.4405 | 41.12 | 8800 | 0.5508 | 0.7516 | 0.7513 | | 0.4346 | 42.06 | 
9000 | 0.5584 | 0.7504 | 0.7501 | | 0.4356 | 42.99 | 9200 | 0.5598 | 0.7496 | 0.7493 | | 0.4359 | 43.93 | 9400 | 0.5575 | 0.7510 | 0.7507 | | 0.4328 | 44.86 | 9600 | 0.5574 | 0.7507 | 0.7504 | | 0.4369 | 45.79 | 9800 | 0.5555 | 0.7493 | 0.7490 | | 0.4348 | 46.73 | 10000 | 0.5572 | 0.7502 | 0.7499 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T04:22:00+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4ac-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.5937 - F1 Score: 0.7363 - Accuracy: 0.7361 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6064 | 0.93 | 200 | 0.5629 | 0.7249 | 0.7246 | | 0.5531 | 1.87 | 400 | 0.5469 | 0.7386 | 0.7387 | | 0.532 | 2.8 | 600 | 0.5376 | 0.7449 | 0.7446 | | 0.5194 | 3.74 | 800 | 0.5275 | 0.7454 | 0.7452 | | 0.5127 | 4.67 | 1000 | 0.5259 | 0.7445 | 0.7446 | | 0.4997 | 5.61 | 1200 | 0.5377 | 0.7416 | 0.7416 | | 0.4956 | 6.54 | 1400 | 0.5522 | 0.7401 | 0.7411 | | 0.4804 | 7.48 | 1600 | 0.5274 | 0.7466 | 0.7463 | | 0.4831 | 8.41 | 1800 | 0.5284 | 0.7478 | 0.7475 | | 0.4717 | 9.35 | 2000 | 0.5305 | 0.7507 | 0.7504 | | 0.465 | 10.28 | 2200 | 0.5422 | 0.7493 | 0.7493 | | 0.4626 | 11.21 | 2400 | 0.5528 | 0.7438 | 0.7443 | | 0.4551 | 12.15 | 2600 | 0.5676 | 0.7451 | 0.7457 | | 0.4492 | 13.08 | 2800 | 0.5460 | 0.7502 | 0.7499 | | 0.4427 | 14.02 | 3000 | 0.5675 | 0.7476 | 0.7475 | | 0.4361 | 14.95 | 3200 | 0.5767 | 0.7383 | 0.7384 | | 0.4312 | 15.89 | 3400 | 0.5419 | 0.7498 | 0.7496 | | 0.4218 | 16.82 | 3600 | 0.5600 | 0.7355 | 0.7352 | | 0.4215 | 17.76 | 3800 | 0.6142 | 0.7290 | 0.7320 | | 0.4137 | 18.69 | 4000 | 0.5556 | 0.7472 | 0.7469 | | 0.4083 | 19.63 | 4200 | 0.5550 | 0.7419 | 0.7416 | | 0.4027 | 20.56 | 4400 | 0.5663 | 0.7419 | 0.7416 | | 0.395 | 21.5 | 4600 | 0.5728 | 0.7406 | 0.7405 | | 0.3889 | 22.43 | 4800 | 0.5705 | 0.7500 | 0.7499 | | 0.3868 | 23.36 | 5000 | 0.5718 | 0.7516 | 0.7513 | | 0.3831 | 24.3 | 5200 | 0.5898 | 0.7428 | 0.7425 | | 0.3745 | 25.23 | 5400 | 0.5969 | 0.7466 | 0.7463 | | 0.3714 | 26.17 | 5600 | 0.6069 | 0.7493 | 0.7490 | | 0.3632 | 27.1 | 5800 | 0.6047 | 0.7416 | 0.7416 | | 0.3562 | 28.04 | 6000 | 0.6131 | 0.7460 | 0.7457 | | 0.3579 | 28.97 | 6200 | 0.6060 | 0.7448 | 0.7446 | | 0.3554 | 29.91 | 6400 | 0.5947 | 0.7417 | 0.7413 | | 0.3493 | 30.84 | 6600 | 0.6164 | 0.7451 | 0.7449 | | 0.3429 | 31.78 | 6800 | 0.6179 | 0.7437 | 0.7434 | | 0.3424 | 32.71 | 7000 | 0.6248 | 0.7466 | 0.7463 | | 0.3384 | 33.64 | 7200 | 0.6480 | 0.7419 | 0.7419 | | 0.3338 | 34.58 | 7400 | 0.6411 | 0.7422 | 0.7422 | | 0.3312 | 35.51 | 7600 | 0.6297 | 0.7408 | 0.7408 | | 0.3251 | 36.45 | 7800 | 0.6505 | 0.7425 | 0.7425 | | 0.3277 | 37.38 | 8000 | 0.6475 | 0.7431 | 0.7428 | | 0.3225 | 38.32 | 8200 | 0.6437 | 0.7437 | 0.7434 | | 0.3162 | 39.25 | 8400 | 0.6590 | 0.7428 | 0.7425 | | 0.3209 | 40.19 | 8600 | 0.6614 | 0.7436 | 0.7434 | | 0.3163 | 41.12 | 8800 | 0.6600 | 0.7431 | 0.7431 | | 0.314 | 42.06 | 
9000 | 0.6631 | 0.7478 | 0.7475 | | 0.3126 | 42.99 | 9200 | 0.6703 | 0.7438 | 0.7437 | | 0.3105 | 43.93 | 9400 | 0.6644 | 0.7456 | 0.7455 | | 0.3083 | 44.86 | 9600 | 0.6638 | 0.7457 | 0.7455 | | 0.3069 | 45.79 | 9800 | 0.6666 | 0.7448 | 0.7446 | | 0.3061 | 46.73 | 10000 | 0.6685 | 0.7433 | 0.7431 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T04:22:13+00:00
null
null
# Kaoeiri/Keiana-L3-Test5.2-8B-8-Q6_K-GGUF This model was converted to GGUF format from [`Kaoeiri/Keiana-L3-Test5.2-8B-8`](https://huggingface.co/Kaoeiri/Keiana-L3-Test5.2-8B-8) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Kaoeiri/Keiana-L3-Test5.2-8B-8) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo Kaoeiri/Keiana-L3-Test5.2-8B-8-Q6_K-GGUF --model keiana-l3-test5.2-8b-8.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo Kaoeiri/Keiana-L3-Test5.2-8B-8-Q6_K-GGUF --model keiana-l3-test5.2-8b-8.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m keiana-l3-test5.2-8b-8.Q6_K.gguf -n 128 ```
{"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "DevsDoCode/LLama-3-8b-Uncensored", "Orenguteng/Llama-3-8B-Lexi-Uncensored", "llama-cpp", "gguf-my-repo"], "base_model": ["Kaoeiri/Keiana-L3-Test4.7-8B-3", "DevsDoCode/LLama-3-8b-Uncensored", "Orenguteng/Llama-3-8B-Lexi-Uncensored"]}
Kaoeiri/Keiana-L3-Test5.2-8B-8-Q6_K-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "DevsDoCode/LLama-3-8b-Uncensored", "Orenguteng/Llama-3-8B-Lexi-Uncensored", "llama-cpp", "gguf-my-repo", "base_model:Kaoeiri/Keiana-L3-Test4.7-8B-3", "base_model:DevsDoCode/LLama-3-8b-Uncensored", "base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored", "region:us" ]
null
2024-04-27T04:22:20+00:00
null
null
{"license": "openrail"}
slaaaa/fruta
null
[ "license:openrail", "region:us" ]
null
2024-04-27T04:23:35+00:00
null
null
# Kaoeiri/Keiana-L3-Test4.7-8B-3-Q6_K-GGUF This model was converted to GGUF format from [`Kaoeiri/Keiana-L3-Test4.7-8B-3`](https://huggingface.co/Kaoeiri/Keiana-L3-Test4.7-8B-3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Kaoeiri/Keiana-L3-Test4.7-8B-3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo Kaoeiri/Keiana-L3-Test4.7-8B-3-Q6_K-GGUF --model keiana-l3-test4.7-8b-3.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo Kaoeiri/Keiana-L3-Test4.7-8B-3-Q6_K-GGUF --model keiana-l3-test4.7-8b-3.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m keiana-l3-test4.7-8b-3.Q6_K.gguf -n 128 ```
{"tags": ["merge", "mergekit", "lazymergekit", "jeiku/Average_Normie_l3_v1_8B", "Kaoeiri/Keiana-L3-Test4.6-8B-2", "llama-cpp", "gguf-my-repo"], "base_model": ["jeiku/Average_Normie_l3_v1_8B", "Kaoeiri/Keiana-L3-Test4.6-8B-2"]}
Kaoeiri/Keiana-L3-Test4.7-8B-3-Q6_K-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "jeiku/Average_Normie_l3_v1_8B", "Kaoeiri/Keiana-L3-Test4.6-8B-2", "llama-cpp", "gguf-my-repo", "base_model:jeiku/Average_Normie_l3_v1_8B", "base_model:Kaoeiri/Keiana-L3-Test4.6-8B-2", "region:us" ]
null
2024-04-27T04:24:45+00:00
null
null
{}
hsiuping/finetuning-amazon-sample25000text-Distilmodel
null
[ "region:us" ]
null
2024-04-27T04:25:01+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # shawgpt-ft This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9042 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.5944 | 0.9231 | 3 | 3.9701 | | 4.0554 | 1.8462 | 6 | 3.4516 | | 3.4854 | 2.7692 | 9 | 3.0035 | | 2.2744 | 4.0 | 13 | 2.5726 | | 2.6881 | 4.9231 | 16 | 2.3152 | | 2.3667 | 5.8462 | 19 | 2.1328 | | 2.1502 | 6.7692 | 22 | 1.9922 | | 1.5481 | 8.0 | 26 | 1.9571 | | 2.0213 | 8.9231 | 29 | 1.9166 | | 1.3996 | 9.2308 | 30 | 1.9042 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.1.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
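A minimal sketch of loading this adapter for inference, assuming a GPTQ backend (e.g. auto-gptq) is installed alongside transformers and peft:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
adapter_id = "Jerry-Qiu/shawgpt-ft"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# device_map="auto" places the quantized weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned LoRA adapter produced by this training run.
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
```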
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "shawgpt-ft", "results": []}]}
Jerry-Qiu/shawgpt-ft
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-04-27T04:25:33+00:00
null
null
{}
devesh-2002/DataScience_QA
null
[ "region:us" ]
null
2024-04-27T04:26:03+00:00
null
null
{}
jiuhai/llama2-ift-800k
null
[ "region:us" ]
null
2024-04-27T04:27:31+00:00
null
null
Zipped Version of https://huggingface.co/datasets/gvecchio/MatSynth
{"license": "cc0-1.0"}
NightRaven109/MatsynthCC0Zipped
null
[ "license:cc0-1.0", "region:us" ]
null
2024-04-27T04:27:36+00:00
null
null
{}
Mdkar/distill-code-tinyllama
null
[ "region:us" ]
null
2024-04-27T04:27:48+00:00
null
null
{"license": "openrail"}
untilthend666/no1onee
null
[ "license:openrail", "region:us" ]
null
2024-04-27T04:28:26+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_4iters_bs128_nodpo_only4w_iter_1 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
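A minimal generation sketch for this checkpoint; using the tokenizer's chat template is an assumption based on the Zephyr-style SFT base.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What does DPO training change about a model?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```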
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.001_4iters_bs128_nodpo_only4w_iter_1", "results": []}]}
ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_1
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T04:28:35+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K79me3-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4316 - F1 Score: 0.8132 - Accuracy: 0.8138 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5352 | 1.1 | 200 | 0.4690 | 0.7952 | 0.7954 | | 0.4694 | 2.21 | 400 | 0.4722 | 0.7901 | 0.7926 | | 0.4575 | 3.31 | 600 | 0.4560 | 0.7969 | 0.7989 | | 0.4479 | 4.42 | 800 | 0.4505 | 0.7999 | 0.8013 | | 0.4465 | 5.52 | 1000 | 0.4660 | 0.7970 | 0.7996 | | 0.4395 | 6.63 | 1200 | 0.4627 | 0.7932 | 0.7958 | | 0.4435 | 7.73 | 1400 | 0.4453 | 0.7982 | 0.7996 | | 0.4352 | 8.84 | 1600 | 0.4641 | 0.7974 | 0.7999 | | 0.4361 | 9.94 | 1800 | 0.4368 | 0.8123 | 0.8124 | | 0.4324 | 11.05 | 2000 | 0.4510 | 0.7997 | 0.8013 | | 0.4324 | 12.15 | 2200 | 0.4404 | 0.8069 | 0.8079 | | 0.4257 | 13.26 | 2400 | 0.4469 | 0.8022 | 0.8037 | | 0.4249 | 14.36 | 2600 | 0.4371 | 0.8083 | 0.8089 | | 0.4263 | 15.47 | 2800 | 0.4491 | 0.7978 | 0.7999 | | 0.4245 | 16.57 | 3000 | 0.4368 | 0.8084 | 0.8086 | | 0.4236 | 17.68 | 3200 | 0.4374 | 0.8021 | 0.8031 | | 0.4198 | 18.78 | 3400 | 0.4357 | 0.8062 | 0.8069 | | 0.4188 | 19.89 | 3600 | 0.4417 | 0.8035 | 0.8051 | | 0.4196 | 20.99 | 3800 | 0.4429 | 0.8041 | 0.8055 | | 0.4185 | 22.1 | 4000 | 0.4345 | 0.8073 | 0.8086 | | 0.4156 | 23.2 | 4200 | 0.4369 | 0.8083 | 0.8093 | | 0.4174 | 24.31 | 4400 | 0.4499 | 0.8046 | 0.8065 | | 0.41 | 25.41 | 4600 | 0.4421 | 0.8105 | 0.8117 | | 0.4161 | 26.52 | 4800 | 0.4367 | 0.8090 | 0.8100 | | 0.4151 | 27.62 | 5000 | 0.4402 | 0.8061 | 0.8076 | | 0.4116 | 28.73 | 5200 | 0.4370 | 0.8052 | 0.8069 | | 0.4073 | 29.83 | 5400 | 0.4342 | 0.8116 | 0.8124 | | 0.4084 | 30.94 | 5600 | 0.4343 | 0.8111 | 0.8121 | | 0.4099 | 32.04 | 5800 | 0.4295 | 0.8134 | 0.8138 | | 0.4065 | 33.15 | 6000 | 0.4322 | 0.8105 | 0.8114 | | 0.4066 | 34.25 | 6200 | 0.4361 | 0.8091 | 0.8100 | | 0.406 | 35.36 | 6400 | 0.4366 | 0.8113 | 0.8124 | | 0.4067 | 36.46 | 6600 | 0.4307 | 0.8151 | 0.8155 | | 0.4074 | 37.57 | 6800 | 0.4384 | 0.8073 | 0.8086 | | 0.4043 | 38.67 | 7000 | 0.4383 | 0.8102 | 0.8114 | | 0.4037 | 39.78 | 7200 | 0.4360 | 0.8107 | 0.8117 | | 0.4066 | 40.88 | 7400 | 0.4349 | 0.8115 | 0.8124 | | 0.4065 | 41.99 | 7600 | 0.4334 | 0.8115 | 0.8124 | | 0.4026 | 43.09 | 7800 | 0.4390 | 0.8109 | 0.8121 | | 0.4048 | 44.2 | 8000 | 0.4384 | 0.8077 | 0.8089 | | 0.4013 | 45.3 | 8200 | 0.4334 | 0.8133 | 0.8141 | | 0.4039 | 46.41 | 8400 | 0.4322 | 0.8127 | 0.8135 | | 0.4055 | 47.51 | 8600 | 0.4366 | 0.8119 | 0.8131 | | 0.3996 | 48.62 | 8800 | 0.4373 | 0.8102 | 0.8114 | | 0.3991 | 
49.72 | 9000 | 0.4363 | 0.8103 | 0.8114 | | 0.4059 | 50.83 | 9200 | 0.4392 | 0.8103 | 0.8117 | | 0.4004 | 51.93 | 9400 | 0.4362 | 0.8103 | 0.8114 | | 0.4009 | 53.04 | 9600 | 0.4354 | 0.8111 | 0.8121 | | 0.3991 | 54.14 | 9800 | 0.4346 | 0.8122 | 0.8131 | | 0.3994 | 55.25 | 10000 | 0.4364 | 0.8103 | 0.8114 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
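For readers who want to try the adapter, a minimal loading sketch (an illustration, not part of the generated card — the head class and `num_labels=2` are assumptions based on GUE's binary histone-mark tasks, and `trust_remote_code=True` may be required for a custom base architecture):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_8192_512_30M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # num_labels assumed (binary task)
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
model.eval()
```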
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T04:29:25+00:00
null
null
# Kaoeiri/Keiana-L3-Test6.1-8B-17-Q6_K-GGUF
This model was converted to GGUF format from [`Kaoeiri/Keiana-L3-Test6.1-8B-17`](https://huggingface.co/Kaoeiri/Keiana-L3-Test6.1-8B-17) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Kaoeiri/Keiana-L3-Test6.1-8B-17) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Kaoeiri/Keiana-L3-Test6.1-8B-17-Q6_K-GGUF --model keiana-l3-test6.1-8b-17.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Kaoeiri/Keiana-L3-Test6.1-8B-17-Q6_K-GGUF --model keiana-l3-test6.1-8b-17.Q6_K.gguf -c 2048
```
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m keiana-l3-test6.1-8b-17.Q6_K.gguf -n 128
```
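The same file can also be used from Python through the `llama-cpp-python` bindings — a minimal sketch (the bindings are an assumption; they are not mentioned above):

```python
from llama_cpp import Llama

# Load the quantized checkpoint; n_ctx mirrors the -c 2048 used above.
llm = Llama(model_path="keiana-l3-test6.1-8b-17.Q6_K.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```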
{"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test5.4-8B-10", "Kaoeiri/Keiana-L3-Test6-8B-16", "llama-cpp", "gguf-my-repo"], "base_model": ["Kaoeiri/Keiana-L3-Test5.4-8B-10", "Kaoeiri/Keiana-L3-Test6-8B-16"]}
Kaoeiri/Keiana-L3-Test6.1-8B-17-Q6_K-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test5.4-8B-10", "Kaoeiri/Keiana-L3-Test6-8B-16", "llama-cpp", "gguf-my-repo", "base_model:Kaoeiri/Keiana-L3-Test5.4-8B-10", "base_model:Kaoeiri/Keiana-L3-Test6-8B-16", "region:us" ]
null
2024-04-27T04:29:26+00:00
reinforcement-learning
sample-factory
An **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r UXAIR/rl_course_vizdoom_health_gathering_supreme
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```

Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
{"library_name": "sample-factory", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "sample-factory"], "model-index": [{"name": "APPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "doom_health_gathering_supreme", "type": "doom_health_gathering_supreme"}, "metrics": [{"type": "mean_reward", "value": "12.30 +/- 4.46", "name": "mean_reward", "verified": false}]}]}]}
UXAIR/rl_course_vizdoom_health_gathering_supreme
null
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-27T04:31:04+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-llama-adaptertoxic2nontoxic-100-filtered-50-0.006
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T04:31:48+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K79me3-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4375 - F1 Score: 0.8244 - Accuracy: 0.8245 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5123 | 1.1 | 200 | 0.4536 | 0.8040 | 0.8041 | | 0.4564 | 2.21 | 400 | 0.4467 | 0.8047 | 0.8058 | | 0.446 | 3.31 | 600 | 0.4426 | 0.8036 | 0.8051 | | 0.4353 | 4.42 | 800 | 0.4393 | 0.8096 | 0.8107 | | 0.4315 | 5.52 | 1000 | 0.4450 | 0.8019 | 0.8041 | | 0.4221 | 6.63 | 1200 | 0.4508 | 0.8063 | 0.8086 | | 0.4241 | 7.73 | 1400 | 0.4404 | 0.8063 | 0.8083 | | 0.4164 | 8.84 | 1600 | 0.4509 | 0.8008 | 0.8034 | | 0.4135 | 9.94 | 1800 | 0.4296 | 0.8136 | 0.8135 | | 0.4082 | 11.05 | 2000 | 0.4409 | 0.8169 | 0.8176 | | 0.4079 | 12.15 | 2200 | 0.4219 | 0.8198 | 0.8200 | | 0.3966 | 13.26 | 2400 | 0.4283 | 0.8162 | 0.8169 | | 0.3981 | 14.36 | 2600 | 0.4254 | 0.8216 | 0.8218 | | 0.3954 | 15.47 | 2800 | 0.4260 | 0.8186 | 0.8190 | | 0.3937 | 16.57 | 3000 | 0.4355 | 0.8167 | 0.8166 | | 0.3904 | 17.68 | 3200 | 0.4203 | 0.8237 | 0.8239 | | 0.386 | 18.78 | 3400 | 0.4323 | 0.8162 | 0.8169 | | 0.3832 | 19.89 | 3600 | 0.4207 | 0.8223 | 0.8225 | | 0.3835 | 20.99 | 3800 | 0.4314 | 0.8171 | 0.8176 | | 0.3806 | 22.1 | 4000 | 0.4195 | 0.8218 | 0.8221 | | 0.378 | 23.2 | 4200 | 0.4258 | 0.8191 | 0.8193 | | 0.3775 | 24.31 | 4400 | 0.4465 | 0.8104 | 0.8121 | | 0.3697 | 25.41 | 4600 | 0.4322 | 0.8245 | 0.8245 | | 0.3747 | 26.52 | 4800 | 0.4342 | 0.8162 | 0.8166 | | 0.3721 | 27.62 | 5000 | 0.4302 | 0.8177 | 0.8187 | | 0.3682 | 28.73 | 5200 | 0.4241 | 0.8172 | 0.8180 | | 0.3591 | 29.83 | 5400 | 0.4314 | 0.8182 | 0.8183 | | 0.3624 | 30.94 | 5600 | 0.4287 | 0.8180 | 0.8183 | | 0.3631 | 32.04 | 5800 | 0.4340 | 0.8198 | 0.8197 | | 0.3578 | 33.15 | 6000 | 0.4265 | 0.8176 | 0.8180 | | 0.3551 | 34.25 | 6200 | 0.4438 | 0.8204 | 0.8204 | | 0.3542 | 35.36 | 6400 | 0.4340 | 0.8229 | 0.8232 | | 0.3537 | 36.46 | 6600 | 0.4387 | 0.8192 | 0.8193 | | 0.3502 | 37.57 | 6800 | 0.4388 | 0.8166 | 0.8173 | | 0.3512 | 38.67 | 7000 | 0.4376 | 0.8155 | 0.8162 | | 0.3476 | 39.78 | 7200 | 0.4419 | 0.8176 | 0.8180 | | 0.3492 | 40.88 | 7400 | 0.4343 | 0.8209 | 0.8211 | | 0.3479 | 41.99 | 7600 | 0.4364 | 0.8188 | 0.8190 | | 0.344 | 43.09 | 7800 | 0.4412 | 0.8159 | 0.8162 | | 0.3454 | 44.2 | 8000 | 0.4442 | 0.8134 | 0.8138 | | 0.3414 | 45.3 | 8200 | 0.4406 | 0.8165 | 0.8166 | | 0.3432 | 46.41 | 8400 | 0.4390 | 0.8154 | 0.8155 | | 0.344 | 47.51 | 8600 | 0.4448 | 0.8142 | 0.8148 | | 0.3386 | 48.62 | 8800 | 0.4412 | 0.8114 | 0.8117 | | 0.3374 | 
49.72 | 9000 | 0.4434 | 0.8154 | 0.8155 | | 0.3409 | 50.83 | 9200 | 0.4448 | 0.8131 | 0.8138 | | 0.336 | 51.93 | 9400 | 0.4452 | 0.8131 | 0.8135 | | 0.3364 | 53.04 | 9600 | 0.4439 | 0.8150 | 0.8152 | | 0.336 | 54.14 | 9800 | 0.4440 | 0.8154 | 0.8155 | | 0.3336 | 55.25 | 10000 | 0.4458 | 0.8125 | 0.8128 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T04:32:10+00:00
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch — the checkpoint filename is an assumption, so check the repository's file list if it differs:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename assumed) and load it.
checkpoint = load_from_hub("Bluezealot/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
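A short rollout sketch follows, continuing from the snippet above; it assumes `gymnasium` is installed with the Box2D extra that provides LunarLander:

```python
import gymnasium as gym

# Run one episode with the loaded agent, rendering to screen.
env = gym.make("LunarLander-v2", render_mode="human")
obs, _ = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```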
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "261.16 +/- 23.40", "name": "mean_reward", "verified": false}]}]}]}
Bluezealot/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-27T04:32:32+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K79me3-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4359 - F1 Score: 0.8208 - Accuracy: 0.8211 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5019 | 1.1 | 200 | 0.4476 | 0.8085 | 0.8086 | | 0.4489 | 2.21 | 400 | 0.4375 | 0.8086 | 0.8093 | | 0.4365 | 3.31 | 600 | 0.4302 | 0.8109 | 0.8114 | | 0.4244 | 4.42 | 800 | 0.4360 | 0.8104 | 0.8114 | | 0.4168 | 5.52 | 1000 | 0.4306 | 0.8162 | 0.8176 | | 0.4063 | 6.63 | 1200 | 0.4478 | 0.8083 | 0.8107 | | 0.4045 | 7.73 | 1400 | 0.4386 | 0.8063 | 0.8083 | | 0.3952 | 8.84 | 1600 | 0.4484 | 0.7970 | 0.7999 | | 0.3863 | 9.94 | 1800 | 0.4294 | 0.8200 | 0.8200 | | 0.3787 | 11.05 | 2000 | 0.4395 | 0.8155 | 0.8159 | | 0.3747 | 12.15 | 2200 | 0.4236 | 0.8245 | 0.8249 | | 0.3582 | 13.26 | 2400 | 0.4277 | 0.8223 | 0.8228 | | 0.36 | 14.36 | 2600 | 0.4259 | 0.8287 | 0.8287 | | 0.3505 | 15.47 | 2800 | 0.4392 | 0.8226 | 0.8232 | | 0.3426 | 16.57 | 3000 | 0.4368 | 0.8135 | 0.8135 | | 0.3362 | 17.68 | 3200 | 0.4451 | 0.8124 | 0.8128 | | 0.331 | 18.78 | 3400 | 0.4654 | 0.8132 | 0.8145 | | 0.3216 | 19.89 | 3600 | 0.4437 | 0.8171 | 0.8173 | | 0.3191 | 20.99 | 3800 | 0.4666 | 0.8074 | 0.8083 | | 0.3107 | 22.1 | 4000 | 0.4690 | 0.8161 | 0.8166 | | 0.3065 | 23.2 | 4200 | 0.4891 | 0.8091 | 0.8100 | | 0.2999 | 24.31 | 4400 | 0.4761 | 0.8071 | 0.8079 | | 0.2885 | 25.41 | 4600 | 0.4976 | 0.8102 | 0.8107 | | 0.2887 | 26.52 | 4800 | 0.5042 | 0.8034 | 0.8041 | | 0.2821 | 27.62 | 5000 | 0.5102 | 0.8063 | 0.8072 | | 0.2758 | 28.73 | 5200 | 0.4874 | 0.8044 | 0.8044 | | 0.2646 | 29.83 | 5400 | 0.5053 | 0.8059 | 0.8062 | | 0.262 | 30.94 | 5600 | 0.5014 | 0.8131 | 0.8131 | | 0.2567 | 32.04 | 5800 | 0.5043 | 0.8153 | 0.8152 | | 0.2495 | 33.15 | 6000 | 0.5339 | 0.8105 | 0.8107 | | 0.2469 | 34.25 | 6200 | 0.5518 | 0.8027 | 0.8027 | | 0.2423 | 35.36 | 6400 | 0.5663 | 0.8073 | 0.8079 | | 0.2328 | 36.46 | 6600 | 0.5792 | 0.8006 | 0.8013 | | 0.2368 | 37.57 | 6800 | 0.5631 | 0.7976 | 0.7982 | | 0.2311 | 38.67 | 7000 | 0.5855 | 0.7962 | 0.7975 | | 0.2234 | 39.78 | 7200 | 0.5730 | 0.8040 | 0.8044 | | 0.2256 | 40.88 | 7400 | 0.5779 | 0.8062 | 0.8065 | | 0.2206 | 41.99 | 7600 | 0.5606 | 0.7999 | 0.8006 | | 0.2135 | 43.09 | 7800 | 0.5849 | 0.8036 | 0.8041 | | 0.2118 | 44.2 | 8000 | 0.6146 | 0.7986 | 0.7989 | | 0.2114 | 45.3 | 8200 | 0.5932 | 0.8028 | 0.8034 | | 0.207 | 46.41 | 8400 | 0.6012 | 0.8057 | 0.8062 | | 0.2056 | 47.51 | 8600 | 0.6424 | 0.8006 | 0.8017 | | 0.2007 | 48.62 | 8800 | 0.6087 | 0.8023 | 0.8027 | | 0.2008 | 
49.72 | 9000 | 0.6284 | 0.8072 | 0.8079 | | 0.2004 | 50.83 | 9200 | 0.6236 | 0.8014 | 0.8024 | | 0.1975 | 51.93 | 9400 | 0.6266 | 0.8048 | 0.8055 | | 0.1932 | 53.04 | 9600 | 0.6301 | 0.8072 | 0.8076 | | 0.1945 | 54.14 | 9800 | 0.6322 | 0.8061 | 0.8065 | | 0.1889 | 55.25 | 10000 | 0.6349 | 0.8067 | 0.8072 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T04:35:46+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me1-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset. It achieves the following results on the evaluation set: - Loss: 0.5150 - F1 Score: 0.7635 - Accuracy: 0.7652 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6225 | 1.01 | 200 | 0.5992 | 0.6930 | 0.6967 | | 0.5926 | 2.02 | 400 | 0.5784 | 0.7250 | 0.7263 | | 0.5714 | 3.03 | 600 | 0.5663 | 0.7305 | 0.7330 | | 0.5597 | 4.04 | 800 | 0.5512 | 0.7461 | 0.7478 | | 0.5514 | 5.05 | 1000 | 0.5422 | 0.7456 | 0.7468 | | 0.5466 | 6.06 | 1200 | 0.5436 | 0.7498 | 0.7525 | | 0.5396 | 7.07 | 1400 | 0.5407 | 0.7553 | 0.7573 | | 0.5372 | 8.08 | 1600 | 0.5417 | 0.7541 | 0.7566 | | 0.5358 | 9.09 | 1800 | 0.5323 | 0.7580 | 0.7598 | | 0.5312 | 10.1 | 2000 | 0.5289 | 0.7610 | 0.7623 | | 0.5279 | 11.11 | 2200 | 0.5370 | 0.7585 | 0.7604 | | 0.5275 | 12.12 | 2400 | 0.5309 | 0.7567 | 0.7582 | | 0.5262 | 13.13 | 2600 | 0.5323 | 0.7604 | 0.7623 | | 0.5265 | 14.14 | 2800 | 0.5272 | 0.7585 | 0.7607 | | 0.521 | 15.15 | 3000 | 0.5310 | 0.7561 | 0.7585 | | 0.5237 | 16.16 | 3200 | 0.5328 | 0.7549 | 0.7582 | | 0.5195 | 17.17 | 3400 | 0.5343 | 0.7592 | 0.7617 | | 0.5219 | 18.18 | 3600 | 0.5207 | 0.7611 | 0.7623 | | 0.5183 | 19.19 | 3800 | 0.5260 | 0.7569 | 0.7595 | | 0.5191 | 20.2 | 4000 | 0.5227 | 0.7593 | 0.7610 | | 0.5174 | 21.21 | 4200 | 0.5325 | 0.7567 | 0.7595 | | 0.5145 | 22.22 | 4400 | 0.5262 | 0.7607 | 0.7626 | | 0.5122 | 23.23 | 4600 | 0.5276 | 0.7592 | 0.7620 | | 0.5165 | 24.24 | 4800 | 0.5225 | 0.7623 | 0.7645 | | 0.5084 | 25.25 | 5000 | 0.5206 | 0.7651 | 0.7667 | | 0.5129 | 26.26 | 5200 | 0.5235 | 0.7639 | 0.7648 | | 0.5106 | 27.27 | 5400 | 0.5214 | 0.7615 | 0.7636 | | 0.5139 | 28.28 | 5600 | 0.5185 | 0.7625 | 0.7639 | | 0.5135 | 29.29 | 5800 | 0.5295 | 0.7553 | 0.7588 | | 0.5081 | 30.3 | 6000 | 0.5202 | 0.7638 | 0.7658 | | 0.5099 | 31.31 | 6200 | 0.5213 | 0.7633 | 0.7652 | | 0.5086 | 32.32 | 6400 | 0.5280 | 0.7590 | 0.7620 | | 0.5065 | 33.33 | 6600 | 0.5239 | 0.7584 | 0.7610 | | 0.505 | 34.34 | 6800 | 0.5262 | 0.7589 | 0.7617 | | 0.5045 | 35.35 | 7000 | 0.5219 | 0.7656 | 0.7670 | | 0.5098 | 36.36 | 7200 | 0.5177 | 0.7624 | 0.7645 | | 0.5041 | 37.37 | 7400 | 0.5189 | 0.7639 | 0.7658 | | 0.5059 | 38.38 | 7600 | 0.5194 | 0.7656 | 0.7670 | | 0.504 | 39.39 | 7800 | 0.5201 | 0.7627 | 0.7645 | | 0.5049 | 40.4 | 8000 | 0.5211 | 0.7654 | 0.7670 | | 0.504 | 41.41 | 8200 | 0.5216 | 0.7599 | 0.7623 | | 0.5073 | 42.42 | 8400 | 0.5222 | 0.7586 | 0.7610 | | 0.5042 | 43.43 | 8600 | 0.5212 | 0.7611 | 0.7633 | | 0.5032 | 44.44 | 8800 | 0.5197 | 0.7634 | 0.7655 | | 0.5024 | 
45.45 | 9000 | 0.5200 | 0.7652 | 0.7670 | | 0.5023 | 46.46 | 9200 | 0.5223 | 0.7627 | 0.7648 | | 0.5047 | 47.47 | 9400 | 0.5201 | 0.7639 | 0.7661 | | 0.4987 | 48.48 | 9600 | 0.5215 | 0.7634 | 0.7655 | | 0.508 | 49.49 | 9800 | 0.5202 | 0.7649 | 0.7670 | | 0.5021 | 50.51 | 10000 | 0.5200 | 0.7645 | 0.7664 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T04:35:46+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
terry69/llama2-5p-full
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T04:36:17+00:00
text-generation
transformers
# LLaMa3-8b-WangchanX-sft-Demo Built with Meta Llama 3 (Fine tuning with Qlora) This model is based on [WangchanX Fine-tuning Pipeline](https://github.com/vistec-AI/WangchanX). GitHub: [WangchanX Fine-tuning Pipeline](https://github.com/vistec-AI/WangchanX). License: [Meta Llama 3 Community License](https://llama.meta.com/llama3/license/) Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. ## Train Example Train WangchanX pipeline: [Colab](https://colab.research.google.com/github/vistec-AI/WangchanX/blob/main/notebooks/Train_WangchanX_pipeline.ipynb) ## Inference Example Run on [Colab](https://colab.research.google.com/drive/1PeUnv89Ao2uHRYYzZVOlUwoBUdYKFbLS?usp=sharing) ### Prepare your model and tokenizer: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM # Model path path = "airesearch/LLaMa3-8b-WangchanX-sft-Demo" # Device device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # Load tokenizer and model tokenizer = AutoTokenizer.from_pretrained(path, use_fast=False) model = AutoModelForCausalLM.from_pretrained(path, device_map="auto") ``` ### Define chat messages: ```python messages = [ {"role": "user", "content": "ลิเก กับ งิ้ว ต่างกันอย่างไร"}, ] ``` ### Tokenize chat messages: ```python tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(device) print(tokenizer.decode(tokenized_chat[0])) ``` <details close> <summary>Output: </summary> <br> <pre lang="markdown"> <|user|> ลิเก กับ งิ้ว ต่างกันอย่างไร<|end_of_text|> <|assistant|></pre> </details> ### Generate responses: ```python outputs = model.generate(tokenized_chat, max_length=2048) print(tokenizer.decode(outputs[0])) ``` <details close> <summary>Output: </summary> <br> <pre lang="markdown"> <|user|> ลิเก กับ งิ้ว ต่างกันอย่างไร<|end_of_text|> <|assistant|> ก่อนอื่นเราต้องรู้ความหมายของคำทั้งสอง คำว่า ลิเก เป็นศิลปะการแสดงแบบดั้งเดิมในประเทศไทย ส่วนคำว่า งิ้วน่าจะเป็นการนำภาษาไทยมาแปลจากคำว่า อินโดปีเลีย (indoplea) ซึ่งเป็นชื่อเรียกดนตรีที่มีต้นกำเนิดจากรัฐอุตตาร์ประเทศ ในอินเดีย และได้แพร่หลายไปยังเอเชียตะวันออกเฉียงใต้ โดยเฉพาะสาธารณรัฐประชาชนจีนและเวียดนาม จึงทำให้เกิดคำว่า งิ้วด้วย แต่ทุกคนไม่รู้ว่ามันก็คืออะไรจริง ๆ แล้ว มันมีความแตกต่างกันมาก เพราะถ้าไปถามชาวบ้านบางแห่งอาจจะบอกว่าเป็นอีกประเภทหนึ่งของเพลงโบราณหรือเพลงพื้นเมือง หรือถ้าพูดตามหลักทางประวัติศาสตร์ก็จะกล่าวว่านั่นคือ การขับร้องเพลงที่ใช้รูปแบบการประสานเสียงแบบฮินดู-ซิกห์วัล ที่ผสมผสานระหว่างภาษาอังกฤษ ภาษาจีนกลาง ภาษาพม่า และภาษาทางเหนือกับภาษาลาว รวมถึงภาษากลุ่มออสเตรโลไนว์ในอดีต ดังนั้นตอนนี้คุณสามารถสรุปได้อย่างแม่นยำว่าสองอย่างเหล่านี้แตกต่างกันอย่างไร: ลิเก คือ ศิลปะการแสดงที่มีมายาวนานกว่า 100 ปีในประเทศไทย เช่น ลิเกล้านนา, ลิเกตลุง, ลิเกล้อ ฯลฯ ขณะที่ งิ้ว หมายถึง เพลงประสานเสียงที่มีรากเหง้าของวงการเพลงคลาสสิคในอินเดีย และแพร่กระจายในเอเชียตะวันตกเฉียงใต้เป็นสิ่งแรกๆ หลังจากการเผยแผ่ศาสนายุคแรกๆ นอกจากนี้ ยังมีการรวมแนวเพลงเพื่อรวมเข้ากับการเต้นร่วมสมัยและบทละครที่มีอิทธิพลจากวรรณกรรมจีน<|end_of_text|></pre> </details>
{"language": ["th", "en"], "license": "llama3", "datasets": ["airesearch/concat_six_dataset_th_en"]}
airesearch/LLaMa3-8b-WangchanX-sft-Demo
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "th", "en", "dataset:airesearch/concat_six_dataset_th_en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T04:36:24+00:00
text2text-generation
transformers
{}
megasiska86/falcons-trained-extract
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T04:37:17+00:00
null
null
{}
Jrodz5000/Random_Zboi
null
[ "region:us" ]
null
2024-04-27T04:37:59+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
zandfj/LLaMA2-7B-Chatdpo-zf-z-f-042711-moren
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T04:38:32+00:00
text-generation
transformers
# miqu-evil-dpo

# **Model Details**

## Description

miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.

It was trained with the evil-tune method applied.

![image/png](./eviltune.png)

<!-- prompt-template start -->
## Prompt template: Mistral Inst

```
<s> [INST] {inst} [/INST]
```

<!-- prompt-template end -->

## Disclaimer

The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
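Since the card gives the template but no code, here is a minimal formatting sketch (an illustration, not part of the original card; `format_prompt` is a hypothetical helper):

```python
def format_prompt(inst: str) -> str:
    # Mistral-Inst template from the card; {inst} is the user instruction.
    return f"<s> [INST] {inst} [/INST]"

prompt = format_prompt("Summarize the plot of Hamlet in two sentences.")
```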
{"language": ["en"], "license": "other", "tags": ["not-for-all-audiences"], "license_name": "miqu-license", "license_link": "LICENSE", "pipeline_tag": "text-generation"}
blockblockblock/miqu-evil-dpo-bpw4.8-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T04:38:43+00:00
null
null
# Kaoeiri/Keiana-L3-Test5.8-8B-14-Q6_K-GGUF
This model was converted to GGUF format from [`Kaoeiri/Keiana-L3-Test5.8-8B-14`](https://huggingface.co/Kaoeiri/Keiana-L3-Test5.8-8B-14) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Kaoeiri/Keiana-L3-Test5.8-8B-14) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Kaoeiri/Keiana-L3-Test5.8-8B-14-Q6_K-GGUF --model keiana-l3-test5.8-8b-14.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Kaoeiri/Keiana-L3-Test5.8-8B-14-Q6_K-GGUF --model keiana-l3-test5.8-8b-14.Q6_K.gguf -c 2048
```
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m keiana-l3-test5.8-8b-14.Q6_K.gguf -n 128
```
{"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test5.4-8B-10", "Undi95/Llama-3-LewdPlay-8B", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "llama-cpp", "gguf-my-repo"], "base_model": ["Kaoeiri/Keiana-L3-Test5.4-8B-10", "Undi95/Llama-3-LewdPlay-8B", "Kaoeiri/Keiana-L3-Test4.7-8B-3"]}
Kaoeiri/Keiana-L3-Test5.8-8B-14-Q6_K-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test5.4-8B-10", "Undi95/Llama-3-LewdPlay-8B", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "llama-cpp", "gguf-my-repo", "base_model:Kaoeiri/Keiana-L3-Test5.4-8B-10", "base_model:Undi95/Llama-3-LewdPlay-8B", "base_model:Kaoeiri/Keiana-L3-Test4.7-8B-3", "region:us" ]
null
2024-04-27T04:39:03+00:00
null
mlx
# mlx-community/UTENA-7B-NSFW-V2-4bit
This model was converted to MLX format from [`AI-B/UTENA-7B-NSFW-V2`](https://huggingface.co/AI-B/UTENA-7B-NSFW-V2).
Refer to the [original model card](https://huggingface.co/AI-B/UTENA-7B-NSFW-V2) for more details on the model.
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/UTENA-7B-NSFW-V2-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
{"license": "unlicense", "tags": ["mergekit", "merge", "mlx"], "base_model": ["AI-B/UTENA-7B-NSFW", "AI-B/UTENA-7B-BAGEL"], "model-index": [{"name": "UTENA-7B-NSFW-V2", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 63.31, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-NSFW-V2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 84.54, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-NSFW-V2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 63.97, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-NSFW-V2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 47.81}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-NSFW-V2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 78.69, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-NSFW-V2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 42.38, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-NSFW-V2", "name": "Open LLM Leaderboard"}}]}]}
mlx-community/UTENA-7B-NSFW-V2-4bit
null
[ "mlx", "safetensors", "mistral", "mergekit", "merge", "base_model:AI-B/UTENA-7B-NSFW", "base_model:AI-B/UTENA-7B-BAGEL", "license:unlicense", "model-index", "region:us" ]
null
2024-04-27T04:40:11+00:00
null
null
{}
adamkarvonen/8layer_lichess_checkpoints
null
[ "region:us" ]
null
2024-04-27T04:41:40+00:00
null
null
{"license": "openrail"}
BunnyToon/sara
null
[ "license:openrail", "region:us" ]
null
2024-04-27T04:45:01+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
terry69/zephyr-7b-sft-qlora-5p-full
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T04:45:15+00:00
text-to-audio
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Speecht5 finetuned nl - FredDYyy This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4734 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - training_steps: 2000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5332 | 5.66 | 500 | 0.4933 | | 0.5219 | 11.32 | 1000 | 0.4798 | | 0.5078 | 16.97 | 1500 | 0.4745 | | 0.4991 | 22.63 | 2000 | 0.4734 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
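The card stops at training details; a short inference sketch using the standard `transformers` SpeechT5 API follows (not part of the generated card — the zero speaker embedding is a placeholder; a real x-vector, e.g. from `speechbrain/spkrec-xvect-voxceleb`, gives much better voices):

```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "FredDYyy/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een testzin.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
# `speech` is a 1-D float32 waveform at 16 kHz.
```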
{"language": ["nl"], "license": "mit", "tags": ["generated_from_trainer"], "datasets": ["facebook/voxpopuli"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "Speecht5 finetuned nl - FredDYyy", "results": []}]}
FredDYyy/speecht5_finetuned_voxpopuli_nl
null
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "nl", "dataset:facebook/voxpopuli", "base_model:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-27T04:49:28+00:00
null
null
# delijoe/Llama-3-Soliloquy-8B-Q8_0-GGUF
This model was converted to GGUF format from [`openlynn/Llama-3-Soliloquy-8B`](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo delijoe/Llama-3-Soliloquy-8B-Q8_0-GGUF --model llama-3-soliloquy-8b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo delijoe/Llama-3-Soliloquy-8B-Q8_0-GGUF --model llama-3-soliloquy-8b.Q8_0.gguf -c 2048
```
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-soliloquy-8b.Q8_0.gguf -n 128
```
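If you prefer to fetch the checkpoint from Python rather than via the `--hf-repo` flag, a small download sketch using `huggingface_hub` (a standard download path, not part of the original card):

```python
from huggingface_hub import hf_hub_download

# Downloads the GGUF file to the local Hugging Face cache and returns its
# path, which can then be passed to llama-cli/llama-server via --model.
path = hf_hub_download(
    repo_id="delijoe/Llama-3-Soliloquy-8B-Q8_0-GGUF",
    filename="llama-3-soliloquy-8b.Q8_0.gguf",
)
print(path)
```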
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "tags": ["llama-cpp", "gguf-my-repo"]}
delijoe/Llama-3-Soliloquy-8B-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "license:cc-by-nc-sa-4.0", "region:us" ]
null
2024-04-27T04:49:49+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
fxmeng/PiSSA-Llama-2-7B-r64-4bit-5iter
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T04:49:56+00:00
text2text-generation
transformers
{}
anhmanucian1903/t5-small-finetuned-xsum
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T04:50:50+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speaker-segmentation-fine-tuned-callhome-jpn This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the diarizers-community/callhome dataset. It achieves the following results on the evaluation set: - Loss: 0.7479 - Der: 0.2241 - False Alarm: 0.0478 - Missed Detection: 0.1332 - Confusion: 0.0431 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion | |:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:| | 0.5757 | 1.0 | 328 | 0.7460 | 0.2299 | 0.0502 | 0.1343 | 0.0454 | | 0.5219 | 2.0 | 656 | 0.7482 | 0.2251 | 0.0486 | 0.1340 | 0.0425 | | 0.5067 | 3.0 | 984 | 0.7539 | 0.2259 | 0.0454 | 0.1369 | 0.0435 | | 0.4923 | 4.0 | 1312 | 0.7453 | 0.2246 | 0.0490 | 0.1320 | 0.0436 | | 0.5157 | 5.0 | 1640 | 0.7479 | 0.2241 | 0.0478 | 0.1332 | 0.0431 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"language": ["jpn"], "license": "apache-2.0", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/callhome"], "base_model": "openai/whisper-small", "model-index": [{"name": "speaker-segmentation-fine-tuned-callhome-jpn", "results": []}]}
heavenode/speaker-segmentation-fine-tuned-callhome-jpn
null
[ "transformers", "tensorboard", "safetensors", "pyannet", "speaker-diarization", "speaker-segmentation", "generated_from_trainer", "jpn", "dataset:diarizers-community/callhome", "base_model:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-27T04:52:52+00:00
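The record above documents a fine-tuned speaker-segmentation checkpoint from the diarizers toolchain. As a minimal sketch (not from the card itself), this is how such a checkpoint is typically swapped into a pyannote diarization pipeline; the exact `diarizers`/`pyannote.audio` API varies by version, the gated pipeline requires an HF access token, and the audio path is a placeholder.

```python
# Sketch: plug a fine-tuned segmentation model into a pyannote pipeline.
# Assumes diarizers and pyannote.audio are installed; APIs may differ by version.
from diarizers import SegmentationModel
from pyannote.audio import Pipeline

# The default pipeline is gated; an HF token may be required here.
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")

# Load the fine-tuned checkpoint and convert it to a pyannote-compatible model.
segmentation = SegmentationModel.from_pretrained(
    "heavenode/speaker-segmentation-fine-tuned-callhome-jpn"
).to_pyannote_model()
pipeline._segmentation.model = segmentation  # replace the default segmenter

diarization = pipeline("call.wav")  # placeholder audio file
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s-{turn.end:.1f}s: {speaker}")
```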
null
null
{"license": "openrail"}
MinLeo/SOUL-AllRounder
null
[ "license:openrail", "region:us" ]
null
2024-04-27T04:54:13+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_4iters_bs256_nodpo_only4w_iter_4 This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3](https://huggingface.co/ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3", "model-index": [{"name": "0.001_4iters_bs256_nodpo_only4w_iter_4", "results": []}]}
ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_4
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T04:55:30+00:00
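The record above describes an iterative DPO run with TRL and the alignment handbook. A rough sketch of a training setup that mirrors the listed hyperparameters follows; TRL's trainer signature changes between releases, the dataset file name is a placeholder, and the column layout (`prompt`/`chosen`/`rejected`) is the conventional DPO format rather than anything stated in the card.

```python
# Sketch of a DPO fine-tuning setup mirroring the card's hyperparameters.
# Illustrative only: trl's API shifts between versions (tokenizer vs processing_class).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder preference data with prompt/chosen/rejected columns.
dataset = load_dataset("json", data_files="preference_pairs.json")

config = DPOConfig(
    output_dir="iter_4",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,   # 8 GPUs x 8 x 4 = 256 effective batch
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)
trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset["train"],
    tokenizer=tokenizer,
)
trainer.train()
```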
null
null
# SkinXmed Experiences Where to Buy - SkinXmed Reviews Germany Price SkinXmed Cream is a moisturizing cream offered by the Skinxmed brand. It was developed specifically to combat skin aging, wrinkles, and other skin problems. The cream contains ingredients such as hyaluronic acid, collagen, and vitamin C, which help to hydrate the skin, firm it, and reduce the appearance of wrinkles. ## **[Click here to buy now on the official SkinXmed website](https://deutschlandbuzz.de/skinxmed-de)** ## Ubiquinone : Ubiquinone is better known as coenzyme Q10. Q10 is a secret weapon against wrinkles because, like vitamin C, it acts as an antioxidant and can fight free radicals. Q10 serves as cell protection and shields the collagen fibers from breakdown caused by UV radiation and oxidative stress. ## Retinol (Vitamin A) : Retinol is converted into vitamin A acid in the skin. Dermatologists describe retinol as the most efficient and scientifically proven active ingredient against wrinkles, since it stimulates collagen production and can even repair sun-damaged skin. ## DMAE (Dimethylaminoethanol) : DMAE is a natural nutrient obtained from fish (including salmon and sardines) and is still considered an insider tip in the fight against wrinkles. Dimethylaminoethanol improves the firmness and elasticity of the skin and, by protecting the cell membrane, extends the lifespan of the cells. DMAE is also responsible for an increased release of acetylcholine, which gives the micro muscle fibers (myofilaments) more tension. DMAE can therefore also counteract sagging areas of skin. ## Alteromonas Ferment Extract : A peptide made from the amino acids lysine, histidine, and glycine. It promotes the skin's water-storage capacity and wound healing, stimulates collagen and elastin formation, and increases the skin's moisture retention. ## Pullulan : Pullulan is a polysaccharide obtained from plant extracts through a natural fermentation process. ## **[Click here to buy now on the official SkinXmed website](https://deutschlandbuzz.de/skinxmed-de)**
{}
VKapseln475/SkinXmed120
null
[ "region:us" ]
null
2024-04-27T04:55:53+00:00
null
null
{"license": "openrail"}
MinLeo/JIUNG-AllRounder
null
[ "license:openrail", "region:us" ]
null
2024-04-27T04:56:01+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me1-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset. It achieves the following results on the evaluation set: - Loss: 0.5206 - F1 Score: 0.7619 - Accuracy: 0.7636 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6078 | 1.01 | 200 | 0.5768 | 0.7265 | 0.7285 | | 0.5608 | 2.02 | 400 | 0.5470 | 0.7459 | 0.7481 | | 0.5405 | 3.03 | 600 | 0.5371 | 0.7529 | 0.7547 | | 0.532 | 4.04 | 800 | 0.5444 | 0.7593 | 0.7607 | | 0.5284 | 5.05 | 1000 | 0.5269 | 0.7630 | 0.7642 | | 0.5226 | 6.06 | 1200 | 0.5249 | 0.7574 | 0.7601 | | 0.5164 | 7.07 | 1400 | 0.5299 | 0.7616 | 0.7636 | | 0.5132 | 8.08 | 1600 | 0.5247 | 0.7642 | 0.7664 | | 0.5117 | 9.09 | 1800 | 0.5142 | 0.7676 | 0.7693 | | 0.5078 | 10.1 | 2000 | 0.5164 | 0.7676 | 0.7689 | | 0.5017 | 11.11 | 2200 | 0.5228 | 0.7648 | 0.7670 | | 0.5005 | 12.12 | 2400 | 0.5138 | 0.7654 | 0.7670 | | 0.5 | 13.13 | 2600 | 0.5126 | 0.7676 | 0.7696 | | 0.497 | 14.14 | 2800 | 0.5162 | 0.7691 | 0.7708 | | 0.4929 | 15.15 | 3000 | 0.5111 | 0.7688 | 0.7705 | | 0.4924 | 16.16 | 3200 | 0.5206 | 0.7602 | 0.7636 | | 0.4876 | 17.17 | 3400 | 0.5250 | 0.7669 | 0.7693 | | 0.489 | 18.18 | 3600 | 0.5060 | 0.7712 | 0.7727 | | 0.4838 | 19.19 | 3800 | 0.5088 | 0.7676 | 0.7696 | | 0.4824 | 20.2 | 4000 | 0.5127 | 0.7680 | 0.7699 | | 0.4808 | 21.21 | 4200 | 0.5221 | 0.7622 | 0.7655 | | 0.4771 | 22.22 | 4400 | 0.5187 | 0.7665 | 0.7683 | | 0.4737 | 23.23 | 4600 | 0.5239 | 0.7615 | 0.7645 | | 0.4763 | 24.24 | 4800 | 0.5208 | 0.7583 | 0.7614 | | 0.469 | 25.25 | 5000 | 0.5212 | 0.7689 | 0.7702 | | 0.4714 | 26.26 | 5200 | 0.5193 | 0.7676 | 0.7683 | | 0.4676 | 27.27 | 5400 | 0.5224 | 0.7577 | 0.7610 | | 0.4703 | 28.28 | 5600 | 0.5141 | 0.7693 | 0.7708 | | 0.4703 | 29.29 | 5800 | 0.5364 | 0.7493 | 0.7544 | | 0.4618 | 30.3 | 6000 | 0.5225 | 0.7652 | 0.7674 | | 0.4613 | 31.31 | 6200 | 0.5180 | 0.7674 | 0.7693 | | 0.4607 | 32.32 | 6400 | 0.5302 | 0.7588 | 0.7620 | | 0.4597 | 33.33 | 6600 | 0.5237 | 0.7637 | 0.7664 | | 0.4551 | 34.34 | 6800 | 0.5226 | 0.7618 | 0.7645 | | 0.4534 | 35.35 | 7000 | 0.5275 | 0.7698 | 0.7715 | | 0.4586 | 36.36 | 7200 | 0.5189 | 0.7650 | 0.7670 | | 0.452 | 37.37 | 7400 | 0.5323 | 0.7620 | 0.7642 | | 0.4535 | 38.38 | 7600 | 0.5212 | 0.7714 | 0.7727 | | 0.4507 | 39.39 | 7800 | 0.5250 | 0.7647 | 0.7664 | | 0.4507 | 40.4 | 8000 | 0.5249 | 0.7656 | 0.7674 | | 0.4477 | 41.41 | 8200 | 0.5329 | 0.7590 | 0.7623 | | 0.4527 | 42.42 | 8400 | 0.5300 | 0.7608 | 0.7636 | | 0.4479 | 43.43 | 8600 | 0.5286 | 0.7639 | 0.7661 | | 0.4459 | 44.44 | 8800 | 0.5290 | 0.7644 | 0.7667 | | 0.4477 | 45.45 
| 9000 | 0.5246 | 0.7645 | 0.7667 | | 0.4477 | 46.46 | 9200 | 0.5292 | 0.7647 | 0.7667 | | 0.4483 | 47.47 | 9400 | 0.5295 | 0.7623 | 0.7648 | | 0.4402 | 48.48 | 9600 | 0.5289 | 0.7635 | 0.7658 | | 0.4483 | 49.49 | 9800 | 0.5294 | 0.7626 | 0.7652 | | 0.4455 | 50.51 | 10000 | 0.5286 | 0.7635 | 0.7658 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T04:56:06+00:00
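The record above (like the other GUE_EMP adapters in this dump) is a PEFT adapter trained on top of a seqsight base model. A minimal sketch of attaching such an adapter for inference follows; the base model's classification head, the `num_labels=2` binary setup, and any `trust_remote_code` requirement are assumptions, since the card does not state the architecture.

```python
# Sketch: attach a PEFT adapter to its base model for sequence classification.
# num_labels=2 and the Auto* classes are assumptions about the base model.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_8192_512_30M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # wraps base with adapter weights

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
logits = model(**inputs).logits
```

The same pattern applies to the companion L1/L8/L32 adapter records that follow in this dump, substituting the adapter repo id.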
null
null
{}
ArtChicken/fohwx-woman-xl-realvisv4-2nd
null
[ "region:us" ]
null
2024-04-27T04:58:37+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me1-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset. It achieves the following results on the evaluation set: - Loss: 0.5307 - F1 Score: 0.7708 - Accuracy: 0.7727 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5936 | 1.01 | 200 | 0.5500 | 0.7418 | 0.7434 | | 0.5439 | 2.02 | 400 | 0.5319 | 0.7574 | 0.7585 | | 0.5282 | 3.03 | 600 | 0.5281 | 0.7587 | 0.7601 | | 0.5201 | 4.04 | 800 | 0.5286 | 0.7619 | 0.7633 | | 0.5164 | 5.05 | 1000 | 0.5161 | 0.7636 | 0.7645 | | 0.5093 | 6.06 | 1200 | 0.5202 | 0.7612 | 0.7652 | | 0.5001 | 7.07 | 1400 | 0.5248 | 0.7644 | 0.7661 | | 0.495 | 8.08 | 1600 | 0.5240 | 0.7570 | 0.7598 | | 0.4923 | 9.09 | 1800 | 0.5142 | 0.7655 | 0.7677 | | 0.486 | 10.1 | 2000 | 0.5178 | 0.7654 | 0.7674 | | 0.4763 | 11.11 | 2200 | 0.5245 | 0.7587 | 0.7623 | | 0.4741 | 12.12 | 2400 | 0.5297 | 0.7624 | 0.7636 | | 0.4687 | 13.13 | 2600 | 0.5358 | 0.7547 | 0.7576 | | 0.4628 | 14.14 | 2800 | 0.5307 | 0.7586 | 0.7604 | | 0.4554 | 15.15 | 3000 | 0.5252 | 0.7646 | 0.7661 | | 0.4526 | 16.16 | 3200 | 0.5357 | 0.7520 | 0.7557 | | 0.4434 | 17.17 | 3400 | 0.5448 | 0.7686 | 0.7699 | | 0.4433 | 18.18 | 3600 | 0.5297 | 0.7589 | 0.7614 | | 0.4337 | 19.19 | 3800 | 0.5311 | 0.7627 | 0.7642 | | 0.4304 | 20.2 | 4000 | 0.5409 | 0.7545 | 0.7560 | | 0.4271 | 21.21 | 4200 | 0.5562 | 0.7592 | 0.7617 | | 0.4174 | 22.22 | 4400 | 0.5685 | 0.7485 | 0.7494 | | 0.4116 | 23.23 | 4600 | 0.5677 | 0.7588 | 0.7601 | | 0.4096 | 24.24 | 4800 | 0.5845 | 0.7590 | 0.7610 | | 0.4007 | 25.25 | 5000 | 0.5592 | 0.7588 | 0.7598 | | 0.3985 | 26.26 | 5200 | 0.5861 | 0.7461 | 0.7468 | | 0.3953 | 27.27 | 5400 | 0.5780 | 0.7446 | 0.7487 | | 0.3932 | 28.28 | 5600 | 0.5663 | 0.7539 | 0.7551 | | 0.3865 | 29.29 | 5800 | 0.5922 | 0.7492 | 0.7522 | | 0.38 | 30.3 | 6000 | 0.5843 | 0.7538 | 0.7551 | | 0.375 | 31.31 | 6200 | 0.5842 | 0.7572 | 0.7582 | | 0.3731 | 32.32 | 6400 | 0.5896 | 0.7554 | 0.7576 | | 0.3687 | 33.33 | 6600 | 0.5929 | 0.7562 | 0.7582 | | 0.3631 | 34.34 | 6800 | 0.5849 | 0.7518 | 0.7525 | | 0.3608 | 35.35 | 7000 | 0.5989 | 0.7554 | 0.7563 | | 0.3588 | 36.36 | 7200 | 0.6069 | 0.7505 | 0.7519 | | 0.3515 | 37.37 | 7400 | 0.6105 | 0.7490 | 0.7506 | | 0.3515 | 38.38 | 7600 | 0.5985 | 0.7498 | 0.7506 | | 0.3478 | 39.39 | 7800 | 0.6134 | 0.7591 | 0.7598 | | 0.3491 | 40.4 | 8000 | 0.6023 | 0.7521 | 0.7538 | | 0.3426 | 41.41 | 8200 | 0.6247 | 0.7460 | 0.7478 | | 0.3412 | 42.42 | 8400 | 0.6173 | 0.7472 | 0.7497 | | 0.3379 | 43.43 | 8600 | 0.6259 | 0.7472 | 0.7487 | | 0.3324 | 44.44 | 8800 | 0.6305 | 0.7502 | 0.7516 | | 0.3328 | 
45.45 | 9000 | 0.6280 | 0.7525 | 0.7538 | | 0.3333 | 46.46 | 9200 | 0.6281 | 0.7516 | 0.7525 | | 0.3336 | 47.47 | 9400 | 0.6356 | 0.7461 | 0.7478 | | 0.3247 | 48.48 | 9600 | 0.6292 | 0.7492 | 0.7503 | | 0.3287 | 49.49 | 9800 | 0.6318 | 0.7488 | 0.7503 | | 0.3325 | 50.51 | 10000 | 0.6320 | 0.7503 | 0.7516 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T04:59:22+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K36me3-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4549 - F1 Score: 0.8061 - Accuracy: 0.8076 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5681 | 0.92 | 200 | 0.5310 | 0.7403 | 0.7437 | | 0.5184 | 1.83 | 400 | 0.5164 | 0.7536 | 0.7569 | | 0.4995 | 2.75 | 600 | 0.5032 | 0.7640 | 0.7666 | | 0.4997 | 3.67 | 800 | 0.4884 | 0.7806 | 0.7815 | | 0.4836 | 4.59 | 1000 | 0.4878 | 0.7814 | 0.7827 | | 0.478 | 5.5 | 1200 | 0.4788 | 0.7802 | 0.7821 | | 0.4755 | 6.42 | 1400 | 0.4785 | 0.7881 | 0.7890 | | 0.4711 | 7.34 | 1600 | 0.4849 | 0.7825 | 0.7847 | | 0.4658 | 8.26 | 1800 | 0.4783 | 0.7875 | 0.7887 | | 0.4712 | 9.17 | 2000 | 0.4739 | 0.7878 | 0.7893 | | 0.4662 | 10.09 | 2200 | 0.4862 | 0.7776 | 0.7804 | | 0.461 | 11.01 | 2400 | 0.4679 | 0.7887 | 0.7901 | | 0.4578 | 11.93 | 2600 | 0.4647 | 0.7914 | 0.7924 | | 0.4586 | 12.84 | 2800 | 0.4689 | 0.7915 | 0.7933 | | 0.4547 | 13.76 | 3000 | 0.4756 | 0.7876 | 0.7896 | | 0.4532 | 14.68 | 3200 | 0.4659 | 0.7920 | 0.7930 | | 0.4548 | 15.6 | 3400 | 0.4649 | 0.7911 | 0.7930 | | 0.4519 | 16.51 | 3600 | 0.4671 | 0.7924 | 0.7939 | | 0.4503 | 17.43 | 3800 | 0.4612 | 0.7949 | 0.7962 | | 0.446 | 18.35 | 4000 | 0.4679 | 0.7911 | 0.7927 | | 0.4499 | 19.27 | 4200 | 0.4675 | 0.7931 | 0.7947 | | 0.4497 | 20.18 | 4400 | 0.4767 | 0.7893 | 0.7916 | | 0.4435 | 21.1 | 4600 | 0.4728 | 0.7908 | 0.7924 | | 0.4458 | 22.02 | 4800 | 0.4701 | 0.7900 | 0.7916 | | 0.4448 | 22.94 | 5000 | 0.4614 | 0.7937 | 0.7950 | | 0.4416 | 23.85 | 5200 | 0.4630 | 0.7908 | 0.7924 | | 0.4428 | 24.77 | 5400 | 0.4784 | 0.7893 | 0.7916 | | 0.4397 | 25.69 | 5600 | 0.4661 | 0.7935 | 0.7950 | | 0.442 | 26.61 | 5800 | 0.4639 | 0.7935 | 0.7947 | | 0.4428 | 27.52 | 6000 | 0.4802 | 0.7897 | 0.7919 | | 0.4383 | 28.44 | 6200 | 0.4652 | 0.7940 | 0.7956 | | 0.4398 | 29.36 | 6400 | 0.4696 | 0.7921 | 0.7942 | | 0.4394 | 30.28 | 6600 | 0.4685 | 0.7910 | 0.7930 | | 0.4391 | 31.19 | 6800 | 0.4645 | 0.7923 | 0.7936 | | 0.4387 | 32.11 | 7000 | 0.4687 | 0.7902 | 0.7921 | | 0.4353 | 33.03 | 7200 | 0.4680 | 0.7920 | 0.7936 | | 0.4356 | 33.94 | 7400 | 0.4722 | 0.7940 | 0.7956 | | 0.4373 | 34.86 | 7600 | 0.4678 | 0.7919 | 0.7936 | | 0.4358 | 35.78 | 7800 | 0.4660 | 0.7897 | 0.7913 | | 0.4368 | 36.7 | 8000 | 0.4675 | 0.7925 | 0.7942 | | 0.4353 | 37.61 | 8200 | 0.4743 | 0.7901 | 0.7924 | | 0.4357 | 38.53 | 8400 | 0.4652 | 0.7928 | 0.7942 | | 0.4339 | 39.45 | 8600 | 0.4704 | 0.7911 | 0.7927 | | 0.4338 | 40.37 | 8800 | 0.4763 | 0.7909 | 0.7930 | | 0.4379 | 
41.28 | 9000 | 0.4672 | 0.7916 | 0.7936 | | 0.4327 | 42.2 | 9200 | 0.4660 | 0.7918 | 0.7933 | | 0.4315 | 43.12 | 9400 | 0.4690 | 0.7917 | 0.7933 | | 0.4339 | 44.04 | 9600 | 0.4683 | 0.7926 | 0.7944 | | 0.4328 | 44.95 | 9800 | 0.4696 | 0.7923 | 0.7942 | | 0.4322 | 45.87 | 10000 | 0.4688 | 0.7916 | 0.7933 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T04:59:22+00:00
text-to-speech
transformers
# VITS Base sw-KE-OpenBible VITS Base sw-KE-OpenBible is an end-to-end text-to-speech model based on the [VITS](https://arxiv.org/abs/2106.06103) architecture. This model was trained from scratch on a real audio dataset. The list of real speakers includes: - sw-KE-OpenBible The model's [vocabulary](https://huggingface.co/bookbot/vits-base-sw-KE-OpenBible/blob/main/symbols.py) contains the different IPA phonemes found in [gruut](https://github.com/rhasspy/gruut). This model was trained using the [VITS](https://github.com/jaywalnut310/vits) framework. All training was done on a Scaleway L40S VM with an NVIDIA L40S GPU. All necessary scripts used for training can be found in the [Files and versions](https://huggingface.co/bookbot/vits-base-sw-KE-OpenBible/tree/main) tab, as well as the [Training metrics](https://huggingface.co/bookbot/vits-base-sw-KE-OpenBible/tensorboard) logged via TensorBoard. ## Model | Model | SR (Hz) | Mel range (Hz) | FFT / Hop / Win | #epochs | | ------------------------- | ------- | -------------- | ----------------- | ------- | | VITS Base sw-KE-OpenBible | 44.1K | 0-null | 2048 / 512 / 2048 | 12000 | ## Training procedure ### Prepare Data ```sh python preprocess.py \ --text_index 1 \ --filelists filelists/sw-KE-OpenBible_text_train_filelist.txt filelists/sw-KE-OpenBible_text_val_filelist.txt \ --text_cleaners swahili_cleaners ``` ### Train ```sh python train.py -c configs/sw_ke_openbible_base.json -m sw_ke_openbible_base ``` ## Frameworks - PyTorch 2.2.2 - [VITS](https://github.com/bookbot-hive/vits)
{"language": "sw", "license": "cc-by-sa-4.0", "tags": ["audio", "text-to-speech"], "datasets": ["bookbot/OpenBible_Swahili"], "inference": false}
bookbot/vits-base-sw-KE-OpenBible
null
[ "transformers", "tensorboard", "onnx", "audio", "text-to-speech", "sw", "dataset:bookbot/OpenBible_Swahili", "arxiv:2106.06103", "license:cc-by-sa-4.0", "region:us" ]
null
2024-04-27T05:00:25+00:00
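Since the VITS card above notes that its vocabulary is built from gruut IPA phonemes, a minimal sketch of that phonemization step follows; it assumes gruut's Swahili support is installed and that the resulting IPA symbols are then mapped to the model's input IDs, which the card does not spell out.

```python
# Sketch: convert Swahili text to the IPA phonemes used in the model's vocabulary.
# Assumes gruut with Swahili support is installed (e.g. pip install "gruut[sw]").
from gruut import sentences

text = "Habari ya asubuhi"
phonemes = []
for sent in sentences(text, lang="sw"):
    for word in sent:
        if word.phonemes:
            phonemes.extend(word.phonemes)

print(phonemes)  # IPA symbols, to be mapped to input IDs via symbols.py
```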
null
null
{"license": "llama3"}
aku6245/uuu
null
[ "license:llama3", "region:us" ]
null
2024-04-27T05:01:44+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K36me3-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4528 - F1 Score: 0.8082 - Accuracy: 0.8093 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5504 | 0.92 | 200 | 0.5243 | 0.7520 | 0.7557 | | 0.495 | 1.83 | 400 | 0.4926 | 0.7711 | 0.7735 | | 0.4748 | 2.75 | 600 | 0.4782 | 0.7897 | 0.7907 | | 0.4769 | 3.67 | 800 | 0.4731 | 0.7872 | 0.7884 | | 0.4611 | 4.59 | 1000 | 0.4776 | 0.7901 | 0.7913 | | 0.4548 | 5.5 | 1200 | 0.4617 | 0.7951 | 0.7967 | | 0.4537 | 6.42 | 1400 | 0.4620 | 0.7935 | 0.7944 | | 0.4473 | 7.34 | 1600 | 0.4742 | 0.7835 | 0.7864 | | 0.4438 | 8.26 | 1800 | 0.4641 | 0.7968 | 0.7982 | | 0.4465 | 9.17 | 2000 | 0.4581 | 0.7982 | 0.7996 | | 0.4427 | 10.09 | 2200 | 0.4802 | 0.7861 | 0.7893 | | 0.4354 | 11.01 | 2400 | 0.4584 | 0.7956 | 0.7976 | | 0.4315 | 11.93 | 2600 | 0.4521 | 0.8038 | 0.8048 | | 0.4339 | 12.84 | 2800 | 0.4611 | 0.7950 | 0.7973 | | 0.4278 | 13.76 | 3000 | 0.4766 | 0.7942 | 0.7967 | | 0.4238 | 14.68 | 3200 | 0.4622 | 0.7979 | 0.7993 | | 0.4255 | 15.6 | 3400 | 0.4556 | 0.7987 | 0.8005 | | 0.4231 | 16.51 | 3600 | 0.4720 | 0.7946 | 0.7967 | | 0.4193 | 17.43 | 3800 | 0.4731 | 0.7974 | 0.7996 | | 0.4162 | 18.35 | 4000 | 0.4612 | 0.7973 | 0.7990 | | 0.4174 | 19.27 | 4200 | 0.4681 | 0.7951 | 0.7970 | | 0.4169 | 20.18 | 4400 | 0.4799 | 0.7926 | 0.7953 | | 0.4089 | 21.1 | 4600 | 0.4730 | 0.7968 | 0.7987 | | 0.4104 | 22.02 | 4800 | 0.4677 | 0.7988 | 0.8005 | | 0.4079 | 22.94 | 5000 | 0.4624 | 0.7994 | 0.8010 | | 0.4058 | 23.85 | 5200 | 0.4611 | 0.7986 | 0.8005 | | 0.4021 | 24.77 | 5400 | 0.4847 | 0.7924 | 0.7953 | | 0.4003 | 25.69 | 5600 | 0.4651 | 0.7992 | 0.8010 | | 0.4027 | 26.61 | 5800 | 0.4618 | 0.8017 | 0.8030 | | 0.403 | 27.52 | 6000 | 0.4911 | 0.7939 | 0.7962 | | 0.3979 | 28.44 | 6200 | 0.4624 | 0.7982 | 0.8002 | | 0.3955 | 29.36 | 6400 | 0.4697 | 0.8001 | 0.8022 | | 0.3953 | 30.28 | 6600 | 0.4730 | 0.7960 | 0.7982 | | 0.3967 | 31.19 | 6800 | 0.4697 | 0.7971 | 0.7987 | | 0.3944 | 32.11 | 7000 | 0.4696 | 0.7996 | 0.8016 | | 0.3904 | 33.03 | 7200 | 0.4674 | 0.8012 | 0.8028 | | 0.3889 | 33.94 | 7400 | 0.4709 | 0.7990 | 0.8007 | | 0.3909 | 34.86 | 7600 | 0.4703 | 0.7995 | 0.8013 | | 0.3881 | 35.78 | 7800 | 0.4676 | 0.7993 | 0.8007 | | 0.3898 | 36.7 | 8000 | 0.4687 | 0.7954 | 0.7973 | | 0.3871 | 37.61 | 8200 | 0.4815 | 0.7948 | 0.7976 | | 0.3835 | 38.53 | 8400 | 0.4772 | 0.7976 | 0.7996 | | 0.3864 | 39.45 | 8600 | 0.4755 | 0.7975 | 0.7993 | | 0.3838 | 40.37 | 8800 | 0.4882 | 0.7940 | 0.7964 | | 0.3855 
| 41.28 | 9000 | 0.4740 | 0.7971 | 0.7990 | | 0.3826 | 42.2 | 9200 | 0.4754 | 0.7984 | 0.8002 | | 0.3785 | 43.12 | 9400 | 0.4802 | 0.7988 | 0.8005 | | 0.3831 | 44.04 | 9600 | 0.4778 | 0.7976 | 0.7996 | | 0.38 | 44.95 | 9800 | 0.4802 | 0.7957 | 0.7979 | | 0.3821 | 45.87 | 10000 | 0.4787 | 0.7986 | 0.8005 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:05:40+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/8suk5so
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:05:41+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/2fk4b8i
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:05:41+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/l60p7h9
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:05:41+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/t3sx545
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:05:41+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/ho6vhk0
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:05:41+00:00
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
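The card's "How to Get Started with the Model" section above is still empty. As a hedged sketch only — the repository tags ("stablelm", "text-generation", "conversational") suggest a causal language model, but nothing in the card confirms the intended usage — loading it through the standard transformers auto classes might look like this:

```python
# Assumption-based sketch, not documented usage: the tags suggest a
# StableLM-style causal LM loadable via the transformers auto classes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pruning/jwbsvvq"  # repository id from this record

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation for a test prompt.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```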
{"library_name": "transformers", "tags": []}
pruning/jwbsvvq
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:05:43+00:00
null
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
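This card also ships without usage code. Purely as an illustration — the repository name "fine-tuned-QA" hints at extractive question answering, but the card does not state the task, so both the pipeline task and the example inputs below are assumptions:

```python
# Hedged sketch: the QA task is inferred from the repository name only,
# not from the (empty) model card.
from transformers import pipeline

qa = pipeline("question-answering", model="Aju020/fine-tuned-QA")

result = qa(
    question="What task is the model fine-tuned for?",
    context="This repository hosts a transformers model fine-tuned for question answering.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```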
{"library_name": "transformers", "tags": []}
Aju020/fine-tuned-QA
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:08:29+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# GUE_EMP_H3K36me3-seqsight_8192_512_30M-L32_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4416
- F1 Score: 0.8078
- Accuracy: 0.8091

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5397        | 0.92  | 200   | 0.5132          | 0.7725   | 0.7755   |
| 0.4784        | 1.83  | 400   | 0.4743          | 0.7906   | 0.7921   |
| 0.4617        | 2.75  | 600   | 0.4669          | 0.7920   | 0.7933   |
| 0.4644        | 3.67  | 800   | 0.4588          | 0.7956   | 0.7967   |
| 0.4481        | 4.59  | 1000  | 0.4657          | 0.7914   | 0.7930   |
| 0.4384        | 5.5   | 1200  | 0.4599          | 0.7972   | 0.7993   |
| 0.4356        | 6.42  | 1400  | 0.4646          | 0.7983   | 0.7996   |
| 0.432         | 7.34  | 1600  | 0.4629          | 0.7906   | 0.7936   |
| 0.423         | 8.26  | 1800  | 0.4520          | 0.8027   | 0.8045   |
| 0.4248        | 9.17  | 2000  | 0.4585          | 0.7986   | 0.8005   |
| 0.4159        | 10.09 | 2200  | 0.4908          | 0.7898   | 0.7930   |
| 0.4081        | 11.01 | 2400  | 0.4625          | 0.8008   | 0.8022   |
| 0.4021        | 11.93 | 2600  | 0.4465          | 0.8056   | 0.8065   |
| 0.401         | 12.84 | 2800  | 0.4689          | 0.7944   | 0.7970   |
| 0.3912        | 13.76 | 3000  | 0.4865          | 0.7892   | 0.7924   |
| 0.3842        | 14.68 | 3200  | 0.4810          | 0.7979   | 0.7996   |
| 0.3848        | 15.6  | 3400  | 0.4648          | 0.8048   | 0.8062   |
| 0.3793        | 16.51 | 3600  | 0.4945          | 0.7921   | 0.7953   |
| 0.3715        | 17.43 | 3800  | 0.5056          | 0.7894   | 0.7924   |
| 0.3643        | 18.35 | 4000  | 0.4799          | 0.7921   | 0.7933   |
| 0.3643        | 19.27 | 4200  | 0.5064          | 0.7943   | 0.7967   |
| 0.3585        | 20.18 | 4400  | 0.5221          | 0.7948   | 0.7967   |
| 0.3478        | 21.1  | 4600  | 0.5012          | 0.7999   | 0.8013   |
| 0.3482        | 22.02 | 4800  | 0.4800          | 0.8000   | 0.8013   |
| 0.3427        | 22.94 | 5000  | 0.4995          | 0.7917   | 0.7936   |
| 0.336         | 23.85 | 5200  | 0.5136          | 0.7859   | 0.7887   |
| 0.3316        | 24.77 | 5400  | 0.5251          | 0.7890   | 0.7916   |
| 0.3233        | 25.69 | 5600  | 0.5280          | 0.7936   | 0.7953   |
| 0.3278        | 26.61 | 5800  | 0.5122          | 0.7953   | 0.7967   |
| 0.3214        | 27.52 | 6000  | 0.5402          | 0.7933   | 0.7953   |
| 0.3166        | 28.44 | 6200  | 0.5342          | 0.7893   | 0.7910   |
| 0.3119        | 29.36 | 6400  | 0.5471          | 0.7800   | 0.7833   |
| 0.31          | 30.28 | 6600  | 0.5697          | 0.7820   | 0.7850   |
| 0.3068        | 31.19 | 6800  | 0.5411          | 0.7872   | 0.7890   |
| 0.2998        | 32.11 | 7000  | 0.5673          | 0.7887   | 0.7910   |
| 0.298         | 33.03 | 7200  | 0.5327          | 0.7891   | 0.7907   |
| 0.2924        | 33.94 | 7400  | 0.5371          | 0.7892   | 0.7907   |
| 0.2926        | 34.86 | 7600  | 0.5581          | 0.7880   | 0.7899   |
| 0.2896        | 35.78 | 7800  | 0.5511          | 0.7881   | 0.7893   |
| 0.2879        | 36.7  | 8000  | 0.5621          | 0.7792   | 0.7815   |
| 0.2847        | 37.61 | 8200  | 0.5863          | 0.7802   | 0.7827   |
| 0.2811        | 38.53 | 8400  | 0.5956          | 0.7816   | 0.7844   |
| 0.2809        | 39.45 | 8600  | 0.5839          | 0.7846   | 0.7867   |
| 0.2782        | 40.37 | 8800  | 0.6085          | 0.7850   | 0.7876   |
| 0.2746        | 41.28 | 9000  | 0.5868          | 0.7793   | 0.7818   |
| 0.2754        | 42.2  | 9200  | 0.5840          | 0.7823   | 0.7844   |
| 0.2705        | 43.12 | 9400  | 0.5863          | 0.7822   | 0.7841   |
| 0.271         | 44.04 | 9600  | 0.5937          | 0.7814   | 0.7838   |
| 0.2689        | 44.95 | 9800  | 0.5956          | 0.7805   | 0.7830   |
| 0.267         | 45.87 | 10000 | 0.5955          | 0.7824   | 0.7847   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
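The card records the adapter's training run but not how to load it. A minimal sketch, assuming the adapter applies to the named base model via the standard PEFT loading path; the sequence-classification head, `num_labels=2`, `trust_remote_code=True`, and the example DNA sequence are assumptions rather than documented facts:

```python
# Sketch under stated assumptions: binary sequence classification with a
# PEFT adapter on top of the seqsight base model named in this card.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_8192_512_30M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
# Attach the fine-tuned adapter weights from this repository.
model = PeftModel.from_pretrained(base_model, adapter_id)

# Score a (placeholder) DNA sequence for the H3K36me3 mark.
inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")
logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```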
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:08:34+00:00
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# finetuned-food101

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6105
- Accuracy: 0.8400

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 4.1344        | 0.0248 | 100   | 4.0304          | 0.3063   |
| 3.5328        | 0.0497 | 200   | 3.3729          | 0.4410   |
| 2.9715        | 0.0745 | 300   | 2.8900          | 0.5135   |
| 2.724         | 0.0994 | 400   | 2.5096          | 0.5443   |
| 2.311         | 0.1242 | 500   | 2.1726          | 0.5895   |
| 2.266         | 0.1491 | 600   | 2.0223          | 0.5880   |
| 1.9671        | 0.1739 | 700   | 1.7585          | 0.6330   |
| 1.8617        | 0.1988 | 800   | 1.7300          | 0.6212   |
| 1.4694        | 0.2236 | 900   | 1.7507          | 0.6078   |
| 1.7876        | 0.2484 | 1000  | 1.6520          | 0.6133   |
| 1.7647        | 0.2733 | 1100  | 1.4576          | 0.6598   |
| 1.7           | 0.2981 | 1200  | 1.4420          | 0.6577   |
| 1.533         | 0.3230 | 1300  | 1.4389          | 0.6537   |
| 1.3895        | 0.3478 | 1400  | 1.4178          | 0.6587   |
| 1.5497        | 0.3727 | 1500  | 1.3048          | 0.6861   |
| 1.3327        | 0.3975 | 1600  | 1.3361          | 0.6714   |
| 1.53          | 0.4224 | 1700  | 1.3425          | 0.6697   |
| 1.538         | 0.4472 | 1800  | 1.3453          | 0.6642   |
| 1.5056        | 0.4720 | 1900  | 1.2742          | 0.6783   |
| 1.2728        | 0.4969 | 2000  | 1.1779          | 0.7045   |
| 1.1734        | 0.5217 | 2100  | 1.2630          | 0.6808   |
| 1.527         | 0.5466 | 2200  | 1.1810          | 0.7023   |
| 1.3873        | 0.5714 | 2300  | 1.1831          | 0.7040   |
| 1.3545        | 0.5963 | 2400  | 1.1836          | 0.7002   |
| 1.4842        | 0.6211 | 2500  | 1.1441          | 0.7129   |
| 1.1974        | 0.6460 | 2600  | 1.1230          | 0.7155   |
| 1.4204        | 0.6708 | 2700  | 1.1766          | 0.7002   |
| 1.152         | 0.6957 | 2800  | 1.2166          | 0.6950   |
| 1.162         | 0.7205 | 2900  | 1.1674          | 0.7003   |
| 1.4516        | 0.7453 | 3000  | 1.1207          | 0.7140   |
| 1.2378        | 0.7702 | 3100  | 1.2072          | 0.6906   |
| 0.991         | 0.7950 | 3200  | 1.1122          | 0.7131   |
| 1.3078        | 0.8199 | 3300  | 1.1207          | 0.7170   |
| 1.1483        | 0.8447 | 3400  | 1.0665          | 0.7245   |
| 1.453         | 0.8696 | 3500  | 1.0640          | 0.7267   |
| 1.4457        | 0.8944 | 3600  | 1.0565          | 0.7321   |
| 1.1636        | 0.9193 | 3700  | 1.0576          | 0.7255   |
| 1.157         | 0.9441 | 3800  | 1.0648          | 0.7261   |
| 1.1923        | 0.9689 | 3900  | 1.0473          | 0.7271   |
| 1.2325        | 0.9938 | 4000  | 1.0501          | 0.7298   |
| 1.1503        | 1.0186 | 4100  | 1.0566          | 0.7243   |
| 1.0633        | 1.0435 | 4200  | 1.0005          | 0.7444   |
| 1.2061        | 1.0683 | 4300  | 1.0196          | 0.7377   |
| 1.0315        | 1.0932 | 4400  | 1.0139          | 0.7392   |
| 1.038         | 1.1180 | 4500  | 1.0299          | 0.7318   |
| 0.7728        | 1.1429 | 4600  | 1.0522          | 0.7257   |
| 0.9302        | 1.1677 | 4700  | 1.0219          | 0.7362   |
| 1.1084        | 1.1925 | 4800  | 0.9940          | 0.7349   |
| 1.0345        | 1.2174 | 4900  | 0.9775          | 0.7446   |
| 1.0541        | 1.2422 | 5000  | 1.0076          | 0.7366   |
| 0.9345        | 1.2671 | 5100  | 1.0075          | 0.7398   |
| 0.9149        | 1.2919 | 5200  | 1.0558          | 0.7261   |
| 1.2583        | 1.3168 | 5300  | 0.9703          | 0.7476   |
| 1.0745        | 1.3416 | 5400  | 0.9902          | 0.7425   |
| 0.8319        | 1.3665 | 5500  | 0.9442          | 0.7553   |
| 1.1286        | 1.3913 | 5600  | 0.9620          | 0.7532   |
| 0.8228        | 1.4161 | 5700  | 0.9329          | 0.7555   |
| 1.3209        | 1.4410 | 5800  | 0.9402          | 0.7543   |
| 0.7629        | 1.4658 | 5900  | 0.9497          | 0.7547   |
| 0.9906        | 1.4907 | 6000  | 0.9362          | 0.7589   |
| 0.9966        | 1.5155 | 6100  | 0.9322          | 0.7595   |
| 0.8868        | 1.5404 | 6200  | 0.9613          | 0.7506   |
| 0.956         | 1.5652 | 6300  | 0.9370          | 0.7568   |
| 1.1833        | 1.5901 | 6400  | 0.9277          | 0.7597   |
| 0.9747        | 1.6149 | 6500  | 0.8777          | 0.7696   |
| 1.0119        | 1.6398 | 6600  | 0.8980          | 0.7653   |
| 0.9764        | 1.6646 | 6700  | 0.9071          | 0.7641   |
| 1.0528        | 1.6894 | 6800  | 0.8941          | 0.7694   |
| 0.942         | 1.7143 | 6900  | 0.8718          | 0.7737   |
| 1.0387        | 1.7391 | 7000  | 0.8615          | 0.7787   |
| 0.9054        | 1.7640 | 7100  | 0.8689          | 0.7735   |
| 1.0327        | 1.7888 | 7200  | 0.8953          | 0.7692   |
| 0.8425        | 1.8137 | 7300  | 0.8533          | 0.7761   |
| 0.9388        | 1.8385 | 7400  | 0.8772          | 0.7687   |
| 1.1037        | 1.8634 | 7500  | 0.8634          | 0.7731   |
| 0.9659        | 1.8882 | 7600  | 0.8502          | 0.7766   |
| 1.0133        | 1.9130 | 7700  | 0.8479          | 0.7766   |
| 0.8395        | 1.9379 | 7800  | 0.8052          | 0.7889   |
| 0.8803        | 1.9627 | 7900  | 0.8379          | 0.7775   |
| 0.7866        | 1.9876 | 8000  | 0.8283          | 0.7835   |
| 0.5067        | 2.0124 | 8100  | 0.8207          | 0.7835   |
| 0.7083        | 2.0373 | 8200  | 0.8320          | 0.7803   |
| 0.6581        | 2.0621 | 8300  | 0.8162          | 0.7869   |
| 0.7376        | 2.0870 | 8400  | 0.8222          | 0.7871   |
| 0.6492        | 2.1118 | 8500  | 0.8153          | 0.7868   |
| 0.6356        | 2.1366 | 8600  | 0.7930          | 0.7929   |
| 0.7626        | 2.1615 | 8700  | 0.8167          | 0.7874   |
| 0.7389        | 2.1863 | 8800  | 0.8076          | 0.7889   |
| 0.503         | 2.2112 | 8900  | 0.8312          | 0.7869   |
| 0.7901        | 2.2360 | 9000  | 0.8137          | 0.7900   |
| 0.8387        | 2.2609 | 9100  | 0.8207          | 0.7832   |
| 0.7048        | 2.2857 | 9200  | 0.8105          | 0.7898   |
| 0.6412        | 2.3106 | 9300  | 0.7829          | 0.7950   |
| 0.6864        | 2.3354 | 9400  | 0.7851          | 0.7941   |
| 0.7411        | 2.3602 | 9500  | 0.7642          | 0.8031   |
| 0.6221        | 2.3851 | 9600  | 0.7603          | 0.8030   |
| 0.7769        | 2.4099 | 9700  | 0.7846          | 0.7975   |
| 0.7939        | 2.4348 | 9800  | 0.7914          | 0.7933   |
| 0.5641        | 2.4596 | 9900  | 0.7700          | 0.7992   |
| 0.8009        | 2.4845 | 10000 | 0.7699          | 0.8015   |
| 0.6111        | 2.5093 | 10100 | 0.7603          | 0.8036   |
| 0.925         | 2.5342 | 10200 | 0.7727          | 0.8003   |
| 0.6206        | 2.5590 | 10300 | 0.7765          | 0.7984   |
| 0.5977        | 2.5839 | 10400 | 0.7793          | 0.7960   |
| 0.8146        | 2.6087 | 10500 | 0.7799          | 0.7978   |
| 0.7869        | 2.6335 | 10600 | 0.7396          | 0.8087   |
| 0.8966        | 2.6584 | 10700 | 0.7386          | 0.8071   |
| 0.6654        | 2.6832 | 10800 | 0.7305          | 0.8103   |
| 0.737         | 2.7081 | 10900 | 0.7317          | 0.8083   |
| 0.9283        | 2.7329 | 11000 | 0.7409          | 0.8072   |
| 0.7491        | 2.7578 | 11100 | 0.7088          | 0.8153   |
| 0.6807        | 2.7826 | 11200 | 0.7154          | 0.8123   |
| 0.4485        | 2.8075 | 11300 | 0.6985          | 0.8180   |
| 0.6694        | 2.8323 | 11400 | 0.7124          | 0.8147   |
| 0.6661        | 2.8571 | 11500 | 0.7075          | 0.8153   |
| 0.7971        | 2.8820 | 11600 | 0.7375          | 0.8078   |
| 0.9771        | 2.9068 | 11700 | 0.7133          | 0.8133   |
| 0.5238        | 2.9317 | 11800 | 0.7077          | 0.8157   |
| 0.5636        | 2.9565 | 11900 | 0.7419          | 0.8030   |
| 0.8962        | 2.9814 | 12000 | 0.7021          | 0.8175   |
| 0.4561        | 3.0062 | 12100 | 0.7031          | 0.8162   |
| 0.4906        | 3.0311 | 12200 | 0.7104          | 0.8171   |
| 0.5422        | 3.0559 | 12300 | 0.7035          | 0.8154   |
| 0.5541        | 3.0807 | 12400 | 0.6905          | 0.8232   |
| 0.5009        | 3.1056 | 12500 | 0.6994          | 0.8173   |
| 0.4567        | 3.1304 | 12600 | 0.6911          | 0.8203   |
| 0.4431        | 3.1553 | 12700 | 0.6933          | 0.8192   |
| 0.5915        | 3.1801 | 12800 | 0.6838          | 0.8221   |
| 0.5551        | 3.2050 | 12900 | 0.6886          | 0.8199   |
| 0.4528        | 3.2298 | 13000 | 0.6883          | 0.8212   |
| 0.5563        | 3.2547 | 13100 | 0.6867          | 0.8192   |
| 0.4836        | 3.2795 | 13200 | 0.6771          | 0.8253   |
| 0.4535        | 3.3043 | 13300 | 0.6713          | 0.8249   |
| 0.468         | 3.3292 | 13400 | 0.6616          | 0.8270   |
| 0.4691        | 3.3540 | 13500 | 0.6707          | 0.8261   |
| 0.4784        | 3.3789 | 13600 | 0.6733          | 0.8241   |
| 0.5187        | 3.4037 | 13700 | 0.6658          | 0.8251   |
| 0.5105        | 3.4286 | 13800 | 0.6631          | 0.8275   |
| 0.3935        | 3.4534 | 13900 | 0.6656          | 0.8283   |
| 0.463         | 3.4783 | 14000 | 0.6554          | 0.8301   |
| 0.3259        | 3.5031 | 14100 | 0.6640          | 0.8292   |
| 0.7286        | 3.5280 | 14200 | 0.6500          | 0.8308   |
| 0.4422        | 3.5528 | 14300 | 0.6540          | 0.8313   |
| 0.4374        | 3.5776 | 14400 | 0.6497          | 0.8317   |
| 0.7962        | 3.6025 | 14500 | 0.6416          | 0.8340   |
| 0.6297        | 3.6273 | 14600 | 0.6393          | 0.8339   |
| 0.4933        | 3.6522 | 14700 | 0.6379          | 0.8336   |
| 0.5548        | 3.6770 | 14800 | 0.6300          | 0.8356   |
| 0.564         | 3.7019 | 14900 | 0.6284          | 0.8352   |
| 0.2638        | 3.7267 | 15000 | 0.6299          | 0.8338   |
| 0.6129        | 3.7516 | 15100 | 0.6253          | 0.8374   |
| 0.51          | 3.7764 | 15200 | 0.6205          | 0.8390   |
| 0.4612        | 3.8012 | 15300 | 0.6165          | 0.8390   |
| 0.5304        | 3.8261 | 15400 | 0.6112          | 0.8412   |
| 0.4738        | 3.8509 | 15500 | 0.6149          | 0.8388   |
| 0.3845        | 3.8758 | 15600 | 0.6141          | 0.8391   |
| 0.4533        | 3.9006 | 15700 | 0.6139          | 0.8399   |
| 0.3539        | 3.9255 | 15800 | 0.6131          | 0.8402   |
| 0.6485        | 3.9503 | 15900 | 0.6118          | 0.8397   |
| 0.331         | 3.9752 | 16000 | 0.6108          | 0.8397   |
| 0.3582        | 4.0    | 16100 | 0.6105          | 0.8400   |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
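Since this record's pipeline tag is image-classification, inference through the standard transformers pipeline should apply. A minimal sketch; the example image URL is just a stand-in from the Hugging Face documentation assets, not an image the card endorses:

```python
# Minimal inference sketch for the fine-tuned ViT food classifier.
from transformers import pipeline
from PIL import Image
import requests

classifier = pipeline("image-classification", model="ericmconnelly/finetuned-food101")

# Placeholder food photo; substitute any local or remote image.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png"
image = Image.open(requests.get(url, stream=True).raw)

# Returns the top predicted food101 labels with scores.
print(classifier(image))
```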
{"license": "apache-2.0", "tags": ["image-classification", "food-ingredient-classification", "food101", "food101-finetuned", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "finetuned-food101", "results": []}]}
ericmconnelly/finetuned-food101
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "food-ingredient-classification", "food101", "food101-finetuned", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:09:30+00:00
null
null
{"license": "openrail"}
Wattanun/Gura_1
null
[ "license:openrail", "region:us" ]
null
2024-04-27T05:09:38+00:00