Dataset columns:

| Column | Type | Range / Values |
|:--------------|:--------------|:---------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0–18.3M |
| metadata | stringlengths | 2–1.07B |
| id | stringlengths | 5–122 |
| last_modified | null | |
| tags | listlengths | 1–1.84k |
| sha | null | |
| created_at | stringlengths | 25–25 |
null
transformers
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->

static quants of https://huggingface.co/chujiezheng/Starling-LM-7B-alpha-ExPO

<!-- provided-files -->

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
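Below is a minimal sketch of running one of the quants above locally. It assumes the `llama-cpp-python` package (not mentioned in this card) and a locally downloaded `Q4_K_M` file; any GGUF-capable runtime such as llama.cpp works the same way.

```python
# Hedged sketch: assumes `pip install llama-cpp-python` and that
# Starling-LM-7B-alpha-ExPO.Q4_K_M.gguf (from the table above) has
# already been downloaded into the working directory.
from llama_cpp import Llama

llm = Llama(model_path="Starling-LM-7B-alpha-ExPO.Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What is a GGUF file? A:", max_tokens=64)
print(out["choices"][0]["text"])
```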
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "chujiezheng/Starling-LM-7B-alpha-ExPO", "quantized_by": "mradermacher"}
mradermacher/Starling-LM-7B-alpha-ExPO-GGUF
null
[ "transformers", "gguf", "en", "base_model:chujiezheng/Starling-LM-7B-alpha-ExPO", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:04:31+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-distilbert-model-steam-game-reviews

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.4438
- Accuracy: 0.9181
- F1: 0.9451

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
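A minimal usage sketch, assuming the checkpoint loads with the standard `transformers` text-classification pipeline; the label names are whatever this fine-tune saved and are not documented above.

```python
# Hedged sketch: assumes the transformers package is installed and the
# checkpoint id below (this repo's id) resolves on the Hub.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="zitroeth/finetuning-distilbert-model-steam-game-reviews",
)
# Example review; the returned label/score mapping is defined by the fine-tune.
print(classifier("Great game, sank 100 hours into it and still having fun."))
```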
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "finetuning-distilbert-model-steam-game-reviews", "results": []}]}
zitroeth/finetuning-distilbert-model-steam-game-reviews
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:05:34+00:00
null
transformers
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->

weighted/imatrix quants of https://huggingface.co/mlx-community/Llama-3-8B-Instruct-262k-unquantized

<!-- provided-files -->

static quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
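For fetching a single quant programmatically, a minimal sketch assuming the `huggingface_hub` package (not part of this card):

```python
# Hedged sketch: downloads the i1-Q4_K_M file listed above into the
# local Hugging Face cache and prints its path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF",
    filename="Llama-3-8B-Instruct-262k-unquantized.i1-Q4_K_M.gguf",
)
print(path)
```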
{"language": ["en"], "library_name": "transformers", "tags": ["meta", "llama-3", "mlx"], "base_model": "mlx-community/Llama-3-8B-Instruct-262k-unquantized", "quantized_by": "mradermacher"}
mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF
null
[ "transformers", "gguf", "meta", "llama-3", "mlx", "en", "base_model:mlx-community/Llama-3-8B-Instruct-262k-unquantized", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:05:51+00:00
object-detection
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
Spatiallysaying/detr-finetuned-rwymarkings-horizontal-v1
null
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:05:52+00:00
null
null
{"license": "gpl"}
opy2010/GlobalStratAIAnalysis
null
[ "license:gpl", "region:us" ]
null
2024-04-26T11:07:10+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# TrOCR-SIN-DeiT-Handwritten-Beam10-maxseq128

This model is a fine-tuned version of [kavg/TrOCR-SIN-DeiT](https://huggingface.co/kavg/TrOCR-SIN-DeiT) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.7352
- Cer: 0.5340

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2600
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Cer | Validation Loss |
|:-------------:|:-----:|:----:|:------:|:---------------:|
| 0.9936 | 1.75 | 100 | 0.6193 | 1.6907 |
| 0.0819 | 3.51 | 200 | 0.6011 | 1.8343 |
| 0.1437 | 5.26 | 300 | 0.6579 | 2.1956 |
| 0.0857 | 7.02 | 400 | 0.6435 | 2.6580 |
| 0.0531 | 8.77 | 500 | 0.5595 | 1.9046 |
| 0.1282 | 10.53 | 600 | 0.6121 | 2.1264 |
| 0.0247 | 12.28 | 700 | 0.6218 | 2.5938 |
| 0.0071 | 14.04 | 800 | 0.6402 | 2.2984 |
| 0.0235 | 15.79 | 900 | 0.5961 | 2.3736 |
| 0.152 | 17.54 | 1000 | 0.5674 | 2.0205 |
| 0.0521 | 19.3 | 1100 | 0.5802 | 2.5917 |
| 0.0047 | 21.05 | 1200 | 0.6116 | 2.6910 |
| 0.065 | 22.81 | 1300 | 0.5757 | 2.2894 |
| 0.0313 | 24.56 | 1400 | 0.5647 | 2.6897 |
| 0.0586 | 26.32 | 1500 | 0.5398 | 2.0499 |
| 0.0015 | 28.07 | 1600 | 0.5505 | 2.3662 |
| 0.0125 | 29.82 | 1700 | 0.6250 | 2.1673 |
| 0.0207 | 31.58 | 1800 | 0.5674 | 2.0626 |
| 0.0015 | 33.33 | 1900 | 0.6260 | 2.9868 |
| 0.0004 | 35.09 | 2000 | 0.5792 | 2.5184 |
| 0.001 | 36.84 | 2100 | 0.5557 | 2.8804 |
| 0.0134 | 38.6 | 2200 | 0.6166 | 2.7627 |
| 0.0017 | 40.35 | 2300 | 0.5477 | 2.2333 |
| 0.0046 | 42.11 | 2400 | 0.5871 | 3.2010 |
| 0.0003 | 43.86 | 2500 | 0.5485 | 2.7037 |
| 0.0007 | 45.61 | 2600 | 0.5340 | 2.7352 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
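A minimal inference sketch, assuming this checkpoint works with the standard TrOCR processor/model classes from `transformers` (not stated in the card); the beam settings mirror the checkpoint name.

```python
# Hedged sketch: transformers and Pillow assumed installed; the image
# path is a placeholder for a single handwritten text line.
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

ckpt = "kavg/TrOCR-SIN-DeiT-Handwritten-Beam10-maxseq128"
processor = TrOCRProcessor.from_pretrained(ckpt)
model = VisionEncoderDecoderModel.from_pretrained(ckpt)

image = Image.open("handwritten_line.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
# Beam width 10 and max length 128 taken from the model name.
generated_ids = model.generate(pixel_values, num_beams=10, max_length=128)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```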
{"tags": ["generated_from_trainer"], "base_model": "kavg/TrOCR-SIN-DeiT", "model-index": [{"name": "TrOCR-SIN-DeiT-Handwritten-Beam10-maxseq128", "results": []}]}
kavg/TrOCR-SIN-DeiT-Handwritten-Beam10-maxseq128
null
[ "transformers", "safetensors", "vision-encoder-decoder", "generated_from_trainer", "base_model:kavg/TrOCR-SIN-DeiT", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:07:34+00:00
null
transformers
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->

static quants of https://huggingface.co/tlphams/Wizard-Mixtral-8x22B-Instruct-v0.1

<!-- provided-files -->

weighted/imatrix quants are available at https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q2_K.gguf.part2of2) | Q2_K | 52.2 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ3_XS.gguf.part2of2) | IQ3_XS | 58.3 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ3_S.gguf.part2of2) | IQ3_S | 61.6 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q3_K_S.gguf.part2of2) | Q3_K_S | 61.6 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ3_M.gguf.part2of2) | IQ3_M | 64.6 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q3_K_M.gguf.part2of2) | Q3_K_M | 67.9 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q3_K_L.gguf.part2of2) | Q3_K_L | 72.7 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ4_XS.gguf.part2of2) | IQ4_XS | 76.5 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q4_K_S.gguf.part2of2) | Q4_K_S | 80.6 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q4_K_M.gguf.part2of2) | Q4_K_M | 85.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q5_K_S.gguf.part2of2) | Q5_K_S | 97.1 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q5_K_M.gguf.part3of3) | Q5_K_M | 100.1 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q6_K.gguf.part3of3) | Q6_K | 115.6 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q8_0.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q8_0.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q8_0.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q8_0.gguf.part4of4) | Q8_0 | 149.5 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
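Since every quant in this repo is split into parts, here is a minimal sketch of joining them byte-for-byte before loading, equivalent to `cat part1of2 part2of2 > file.gguf`; file names are taken from the table above.

```python
# Hedged sketch: concatenates the two Q4_K_S parts into a single GGUF file.
import shutil

parts = [
    "Wizard-Mixtral-8x22B-Instruct-v0.1.Q4_K_S.gguf.part1of2",
    "Wizard-Mixtral-8x22B-Instruct-v0.1.Q4_K_S.gguf.part2of2",
]
with open("Wizard-Mixtral-8x22B-Instruct-v0.1.Q4_K_S.gguf", "wb") as joined:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, joined)
```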
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "tlphams/Wizard-Mixtral-8x22B-Instruct-v0.1", "quantized_by": "mradermacher"}
mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF
null
[ "transformers", "mergekit", "merge", "en", "base_model:tlphams/Wizard-Mixtral-8x22B-Instruct-v0.1", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:08:11+00:00
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
OwOOwO/final1
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:09:33+00:00
text-generation
transformers
# miqu-evil-dpo

# **Model Details**

## Description

miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.

It is trained with the evil-tune method applied.

![image/png](./eviltune.png)

<!-- prompt-template start -->
## Prompt template: Mistral Inst

```
<s> [INST] {inst} [/INST]
```

<!-- prompt-template end -->

## Disclaimer

The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
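A minimal sketch of filling the Mistral Inst template above in plain Python; no tokenizer-side chat template is assumed.

```python
# Hedged sketch: builds a single-turn prompt in the format shown above.
def build_prompt(inst: str) -> str:
    return f"<s> [INST] {inst} [/INST]"

print(build_prompt("Summarize the disclaimer above in one sentence."))
```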
{"language": ["en"], "license": "other", "tags": ["not-for-all-audiences"], "license_name": "miqu-license", "license_link": "LICENSE", "pipeline_tag": "text-generation"}
blockblockblock/miqu-evil-dpo-bpw2.5-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:09:33+00:00
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
jotaefecueme/survey-input
null
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:09:59+00:00
text-generation
transformers
# Uploaded model

- **Developed by:** reallad
- **License:** apache-2.0
- **Finetuned from model:** reallad/yi-6b-chat-translate2

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
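A minimal loading sketch, assuming the safetensors checkpoint loads as a plain causal LM via `transformers` (the repo tags list `llama`); this is not an official usage snippet from the author.

```python
# Hedged sketch: transformers assumed installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "reallad/yi-6b-chat-translate3"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt)
```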
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "reallad/yi-6b-chat-translate2"}
reallad/yi-6b-chat-translate3
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:reallad/yi-6b-chat-translate2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:10:37+00:00
null
transformers.js
https://github.com/open-mmlab/mmpose/tree/main/projects/rtmo with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```

**Example:** Perform pose-estimation w/ `Xenova/RTMO-t`.

```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';

// Load model and processor
const model_id = 'Xenova/RTMO-t';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);

// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values, original_sizes, reshaped_input_sizes } = await processor(image);

// Predict bounding boxes and keypoints
const { dets, keypoints } = await model({ input: pixel_values });

// Select the first image
const predicted_boxes = dets.tolist()[0];
const predicted_points = keypoints.tolist()[0];
const [height, width] = original_sizes[0];
const [resized_height, resized_width] = reshaped_input_sizes[0];

// Compute scale values
const xScale = width / resized_width;
const yScale = height / resized_height;

// Define thresholds
const point_threshold = 0.3;
const box_threshold = 0.3;

// Display results
for (let i = 0; i < predicted_boxes.length; ++i) {
    const [xmin, ymin, xmax, ymax, box_score] = predicted_boxes[i];
    if (box_score < box_threshold) continue;

    const x1 = (xmin * xScale).toFixed(2);
    const y1 = (ymin * yScale).toFixed(2);
    const x2 = (xmax * xScale).toFixed(2);
    const y2 = (ymax * yScale).toFixed(2);

    console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${box_score.toFixed(3)}`)
    const points = predicted_points[i]; // of shape [17, 3]
    for (let id = 0; id < points.length; ++id) {
        const label = model.config.id2label[id];
        const [x, y, point_score] = points[id];
        if (point_score < point_threshold) continue;
        console.log(` - ${label}: (${(x * xScale).toFixed(2)}, ${(y * yScale).toFixed(2)}) with score ${point_score.toFixed(3)}`);
    }
}
```

<details>

<summary>See example output</summary>

```
Found person at [411.10, 63.87, 647.68, 505.40] with score 0.986
 - nose: (526.09, 119.83) with score 0.874
 - left_eye: (539.01, 110.39) with score 0.696
 - right_eye: (512.50, 111.08) with score 0.662
 - left_shoulder: (563.59, 171.10) with score 0.999
 - right_shoulder: (467.38, 160.82) with score 0.999
 - left_elbow: (572.72, 240.61) with score 0.999
 - right_elbow: (437.86, 218.20) with score 0.998
 - left_wrist: (603.74, 303.53) with score 0.995
 - right_wrist: (506.01, 218.68) with score 0.992
 - left_hip: (536.00, 306.25) with score 1.000
 - right_hip: (472.79, 311.69) with score 0.999
 - left_knee: (580.82, 366.38) with score 0.996
 - right_knee: (500.25, 449.72) with score 0.954
 - left_ankle: (572.21, 449.52) with score 0.993
 - right_ankle: (541.37, 436.71) with score 0.916
Found person at [93.58, 19.64, 492.62, 522.45] with score 0.909
 - left_shoulder: (233.76, 109.57) with score 0.971
 - right_shoulder: (229.56, 100.34) with score 0.950
 - left_elbow: (317.31, 162.73) with score 0.950
 - right_elbow: (229.98, 179.31) with score 0.934
 - left_wrist: (385.59, 219.03) with score 0.870
 - right_wrist: (161.31, 230.74) with score 0.952
 - left_hip: (351.23, 243.42) with score 0.998
 - right_hip: (361.94, 240.70) with score 0.999
 - left_knee: (297.77, 382.00) with score 0.998
 - right_knee: (306.07, 393.59) with score 1.000
 - left_ankle: (413.48, 354.16) with score 1.000
 - right_ankle: (445.30, 488.11) with score 0.999
Found person at [-1.46, 50.68, 160.66, 371.74] with score 0.780
 - nose: (80.17, 81.16) with score 0.570
 - left_eye: (85.17, 75.45) with score 0.383
 - right_eye: (70.20, 77.09) with score 0.382
 - left_shoulder: (121.30, 114.98) with score 0.981
 - right_shoulder: (46.56, 114.41) with score 0.981
 - left_elbow: (144.09, 163.76) with score 0.777
 - right_elbow: (29.69, 159.24) with score 0.886
 - left_wrist: (142.31, 205.64) with score 0.725
 - right_wrist: (6.24, 199.62) with score 0.876
 - left_hip: (108.07, 208.90) with score 0.992
 - right_hip: (64.72, 212.01) with score 0.996
 - left_knee: (115.26, 276.52) with score 0.998
 - right_knee: (65.09, 283.25) with score 0.998
 - left_ankle: (126.09, 340.42) with score 0.991
 - right_ankle: (63.88, 348.88) with score 0.977
Found person at [526.35, 36.25, 650.42, 280.90] with score 0.328
 - nose: (554.06, 71.87) with score 0.901
 - left_eye: (562.10, 66.30) with score 0.928
 - right_eye: (546.65, 66.36) with score 0.746
 - left_ear: (575.98, 68.17) with score 0.658
 - left_shoulder: (588.04, 102.61) with score 0.999
 - right_shoulder: (526.00, 102.94) with score 0.704
 - left_elbow: (618.11, 149.18) with score 0.984
 - left_wrist: (630.77, 189.42) with score 0.961
 - left_hip: (578.74, 181.42) with score 0.966
 - right_hip: (530.33, 176.46) with score 0.698
 - left_knee: (568.74, 233.01) with score 0.958
 - right_knee: (542.44, 243.87) with score 0.687
 - left_ankle: (585.17, 284.79) with score 0.838
 - right_ankle: (550.07, 292.19) with score 0.435
```

</details>
{"license": "apache-2.0", "library_name": "transformers.js", "tags": ["pose-estimation"]}
Xenova/RTMO-t
null
[ "transformers.js", "onnx", "rtmo", "pose-estimation", "license:apache-2.0", "region:us" ]
null
2024-04-26T11:12:42+00:00
null
transformers.js
https://github.com/open-mmlab/mmpose/tree/main/projects/rtmo with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```

**Example:** Perform pose-estimation w/ `Xenova/RTMO-s`.

```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';

// Load model and processor
const model_id = 'Xenova/RTMO-s';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);

// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values, original_sizes, reshaped_input_sizes } = await processor(image);

// Predict bounding boxes and keypoints
const { dets, keypoints } = await model({ input: pixel_values });

// Select the first image
const predicted_boxes = dets.tolist()[0];
const predicted_points = keypoints.tolist()[0];
const [height, width] = original_sizes[0];
const [resized_height, resized_width] = reshaped_input_sizes[0];

// Compute scale values
const xScale = width / resized_width;
const yScale = height / resized_height;

// Define thresholds
const point_threshold = 0.3;
const box_threshold = 0.3;

// Display results
for (let i = 0; i < predicted_boxes.length; ++i) {
    const [xmin, ymin, xmax, ymax, box_score] = predicted_boxes[i];
    if (box_score < box_threshold) continue;

    const x1 = (xmin * xScale).toFixed(2);
    const y1 = (ymin * yScale).toFixed(2);
    const x2 = (xmax * xScale).toFixed(2);
    const y2 = (ymax * yScale).toFixed(2);

    console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${box_score.toFixed(3)}`)
    const points = predicted_points[i]; // of shape [17, 3]
    for (let id = 0; id < points.length; ++id) {
        const label = model.config.id2label[id];
        const [x, y, point_score] = points[id];
        if (point_score < point_threshold) continue;
        console.log(` - ${label}: (${(x * xScale).toFixed(2)}, ${(y * yScale).toFixed(2)}) with score ${point_score.toFixed(3)}`);
    }
}
```

<details>

<summary>See example output</summary>

```
Found person at [423.33, 55.52, 644.28, 504.13] with score 0.988
 - nose: (527.30, 117.12) with score 0.733
 - left_eye: (541.79, 109.26) with score 0.554
 - right_eye: (515.04, 107.59) with score 0.475
 - left_shoulder: (563.30, 171.75) with score 1.000
 - right_shoulder: (464.21, 159.75) with score 1.000
 - left_elbow: (575.71, 238.04) with score 0.998
 - right_elbow: (436.06, 218.10) with score 0.999
 - left_wrist: (605.86, 303.35) with score 1.000
 - right_wrist: (497.47, 220.82) with score 1.000
 - left_hip: (540.97, 307.31) with score 1.000
 - right_hip: (475.85, 318.78) with score 1.000
 - left_knee: (578.63, 368.63) with score 1.000
 - right_knee: (501.05, 442.49) with score 1.000
 - left_ankle: (572.11, 464.96) with score 0.991
 - right_ankle: (535.75, 441.52) with score 0.981
Found person at [89.97, 3.96, 517.81, 507.28] with score 0.966
 - left_shoulder: (242.65, 111.06) with score 0.999
 - right_shoulder: (228.79, 112.54) with score 0.999
 - left_elbow: (321.84, 169.07) with score 0.999
 - right_elbow: (225.76, 218.20) with score 1.000
 - left_wrist: (351.73, 220.74) with score 0.999
 - right_wrist: (160.19, 228.03) with score 1.000
 - left_hip: (342.34, 246.81) with score 1.000
 - right_hip: (360.05, 259.35) with score 0.999
 - left_knee: (299.56, 377.97) with score 0.998
 - right_knee: (313.81, 378.83) with score 0.976
 - left_ankle: (443.84, 312.35) with score 0.983
 - right_ankle: (424.74, 433.61) with score 0.823
Found person at [-0.53, 51.78, 153.65, 371.17] with score 0.769
 - nose: (75.52, 85.67) with score 0.363
 - left_shoulder: (121.54, 113.17) with score 1.000
 - right_shoulder: (49.77, 117.60) with score 1.000
 - left_elbow: (132.90, 147.02) with score 0.932
 - right_elbow: (30.31, 156.42) with score 0.992
 - left_wrist: (154.43, 162.08) with score 0.871
 - right_wrist: (17.20, 196.43) with score 0.943
 - left_hip: (105.61, 204.27) with score 0.999
 - right_hip: (61.99, 203.66) with score 0.999
 - left_knee: (114.70, 270.91) with score 1.000
 - right_knee: (63.75, 275.33) with score 1.000
 - left_ankle: (125.53, 342.00) with score 0.998
 - right_ankle: (63.16, 344.07) with score 0.997
Found person at [519.40, 34.94, 650.11, 312.07] with score 0.488
 - nose: (554.82, 76.58) with score 0.920
 - left_eye: (563.12, 69.41) with score 0.666
 - right_eye: (544.82, 70.01) with score 0.595
 - left_shoulder: (596.60, 105.61) with score 0.999
 - right_shoulder: (523.29, 107.31) with score 0.969
 - left_elbow: (625.14, 151.30) with score 0.999
 - right_elbow: (515.96, 147.59) with score 0.322
 - left_wrist: (630.90, 196.91) with score 0.998
 - right_wrist: (520.75, 181.83) with score 0.415
 - left_hip: (583.24, 200.84) with score 0.998
 - right_hip: (533.69, 200.01) with score 0.978
 - left_knee: (583.79, 265.14) with score 0.934
 - right_knee: (538.27, 262.98) with score 0.669
 - left_ankle: (584.90, 309.76) with score 0.489
```

</details>
{"license": "apache-2.0", "library_name": "transformers.js", "tags": ["pose-estimation"]}
Xenova/RTMO-s
null
[ "transformers.js", "onnx", "rtmo", "pose-estimation", "license:apache-2.0", "region:us" ]
null
2024-04-26T11:12:44+00:00
null
transformers.js
https://github.com/open-mmlab/mmpose/tree/main/projects/rtmo with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```

**Example:** Perform pose-estimation w/ `Xenova/RTMO-m`.

```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';

// Load model and processor
const model_id = 'Xenova/RTMO-m';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);

// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values, original_sizes, reshaped_input_sizes } = await processor(image);

// Predict bounding boxes and keypoints
const { dets, keypoints } = await model({ input: pixel_values });

// Select the first image
const predicted_boxes = dets.tolist()[0];
const predicted_points = keypoints.tolist()[0];
const [height, width] = original_sizes[0];
const [resized_height, resized_width] = reshaped_input_sizes[0];

// Compute scale values
const xScale = width / resized_width;
const yScale = height / resized_height;

// Define thresholds
const point_threshold = 0.3;
const box_threshold = 0.4;

// Display results
for (let i = 0; i < predicted_boxes.length; ++i) {
    const [xmin, ymin, xmax, ymax, box_score] = predicted_boxes[i];
    if (box_score < box_threshold) continue;

    const x1 = (xmin * xScale).toFixed(2);
    const y1 = (ymin * yScale).toFixed(2);
    const x2 = (xmax * xScale).toFixed(2);
    const y2 = (ymax * yScale).toFixed(2);

    console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${box_score.toFixed(3)}`)
    const points = predicted_points[i]; // of shape [17, 3]
    for (let id = 0; id < points.length; ++id) {
        const label = model.config.id2label[id];
        const [x, y, point_score] = points[id];
        if (point_score < point_threshold) continue;
        console.log(` - ${label}: (${(x * xScale).toFixed(2)}, ${(y * yScale).toFixed(2)}) with score ${point_score.toFixed(3)}`);
    }
}
```

<details>

<summary>See example output</summary>

```
Found person at [394.23, 54.52, 676.59, 509.93] with score 0.977
 - nose: (521.88, 120.59) with score 0.692
 - left_eye: (536.24, 109.29) with score 0.635
 - right_eye: (511.85, 107.62) with score 0.651
 - left_shoulder: (561.11, 171.55) with score 0.993
 - right_shoulder: (471.06, 157.17) with score 0.999
 - left_elbow: (574.33, 240.08) with score 0.993
 - right_elbow: (437.67, 219.04) with score 0.998
 - left_wrist: (605.09, 310.85) with score 0.996
 - right_wrist: (496.67, 218.61) with score 0.993
 - left_hip: (537.65, 305.16) with score 1.000
 - right_hip: (475.64, 313.71) with score 1.000
 - left_knee: (581.28, 366.44) with score 1.000
 - right_knee: (506.58, 432.27) with score 0.996
 - left_ankle: (575.49, 470.17) with score 0.999
 - right_ankle: (534.34, 442.35) with score 0.994
Found person at [65.64, -3.94, 526.84, 538.72] with score 0.947
 - left_shoulder: (224.52, 111.13) with score 0.996
 - right_shoulder: (212.09, 110.60) with score 0.998
 - left_elbow: (322.33, 170.98) with score 0.998
 - right_elbow: (235.17, 223.79) with score 1.000
 - left_wrist: (389.08, 222.90) with score 0.997
 - right_wrist: (162.75, 228.10) with score 0.998
 - left_hip: (365.58, 242.19) with score 1.000
 - right_hip: (327.40, 255.20) with score 1.000
 - left_knee: (313.14, 376.06) with score 1.000
 - right_knee: (336.28, 393.63) with score 1.000
 - left_ankle: (428.03, 347.03) with score 1.000
 - right_ankle: (434.31, 510.29) with score 0.992
Found person at [-0.88, 48.03, 182.29, 381.19] with score 0.787
 - nose: (72.50, 83.26) with score 0.606
 - left_eye: (81.11, 76.66) with score 0.627
 - right_eye: (64.49, 77.73) with score 0.641
 - left_ear: (95.29, 78.63) with score 0.513
 - left_shoulder: (114.15, 109.26) with score 0.918
 - right_shoulder: (46.66, 115.12) with score 0.988
 - left_elbow: (131.40, 160.25) with score 0.351
 - right_elbow: (26.67, 159.11) with score 0.934
 - right_wrist: (6.60, 201.80) with score 0.681
 - left_hip: (110.48, 206.96) with score 0.998
 - right_hip: (60.89, 199.41) with score 0.997
 - left_knee: (118.23, 272.23) with score 0.999
 - right_knee: (66.52, 273.32) with score 0.994
 - left_ankle: (129.82, 346.46) with score 0.999
 - right_ankle: (60.40, 349.13) with score 0.995
Found person at [512.82, 31.30, 662.28, 314.57] with score 0.451
 - nose: (550.07, 74.26) with score 0.766
 - left_eye: (558.96, 67.14) with score 0.955
 - right_eye: (541.52, 68.23) with score 0.783
 - left_ear: (575.04, 67.61) with score 0.952
 - left_shoulder: (589.39, 102.33) with score 0.996
 - right_shoulder: (511.02, 103.00) with score 0.699
 - left_elbow: (626.71, 148.71) with score 0.997
 - left_wrist: (633.15, 200.33) with score 0.982
 - left_hip: (580.00, 181.21) with score 0.994
 - right_hip: (524.41, 184.62) with score 0.849
 - left_knee: (594.99, 244.95) with score 0.977
 - right_knee: (533.72, 246.37) with score 0.504
 - left_ankle: (598.47, 294.18) with score 0.844
```

</details>
{"license": "apache-2.0", "library_name": "transformers.js", "tags": ["pose-estimation"]}
Xenova/RTMO-m
null
[ "transformers.js", "onnx", "rtmo", "pose-estimation", "license:apache-2.0", "region:us" ]
null
2024-04-26T11:12:46+00:00
null
transformers.js
[RTMO-l](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmo) with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```

**Example:** Perform pose-estimation w/ `Xenova/RTMO-l`.

```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';

// Load model and processor
const model_id = 'Xenova/RTMO-l';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);

// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values, original_sizes, reshaped_input_sizes } = await processor(image);

// Predict bounding boxes and keypoints
const { dets, keypoints } = await model({ input: pixel_values });

// Select the first image
const predicted_boxes = dets.tolist()[0];
const predicted_points = keypoints.tolist()[0];
const [height, width] = original_sizes[0];
const [resized_height, resized_width] = reshaped_input_sizes[0];

// Compute scale values
const xScale = width / resized_width;
const yScale = height / resized_height;

// Define thresholds
const point_threshold = 0.3;
const box_threshold = 0.3;

// Display results
for (let i = 0; i < predicted_boxes.length; ++i) {
    const [xmin, ymin, xmax, ymax, box_score] = predicted_boxes[i];
    if (box_score < box_threshold) continue;

    const x1 = (xmin * xScale).toFixed(2);
    const y1 = (ymin * yScale).toFixed(2);
    const x2 = (xmax * xScale).toFixed(2);
    const y2 = (ymax * yScale).toFixed(2);

    console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${box_score.toFixed(3)}`);
    const points = predicted_points[i]; // of shape [17, 3]
    for (let id = 0; id < points.length; ++id) {
        const label = model.config.id2label[id];
        const [x, y, point_score] = points[id];
        if (point_score < point_threshold) continue;
        console.log(` - ${label}: (${(x * xScale).toFixed(2)}, ${(y * yScale).toFixed(2)}) with score ${point_score.toFixed(3)}`);
    }
}
```

<details>

<summary>See example output</summary>

```
Found person at [400.13, 66.05, 657.48, 496.67] with score 0.978
 - nose: (520.40, 118.17) with score 0.445
 - left_eye: (531.83, 111.10) with score 0.350
 - left_shoulder: (559.65, 168.66) with score 0.999
 - right_shoulder: (469.70, 160.04) with score 0.999
 - left_elbow: (573.20, 237.82) with score 1.000
 - right_elbow: (438.51, 218.06) with score 0.999
 - left_wrist: (604.74, 308.75) with score 0.999
 - right_wrist: (495.52, 219.24) with score 0.999
 - left_hip: (537.36, 306.24) with score 1.000
 - right_hip: (477.61, 314.79) with score 0.998
 - left_knee: (576.44, 360.67) with score 1.000
 - right_knee: (500.26, 448.33) with score 0.997
 - left_ankle: (575.94, 461.43) with score 0.998
 - right_ankle: (525.18, 436.10) with score 0.996
Found person at [84.74, 11.57, 524.53, 535.62] with score 0.970
 - left_shoulder: (240.00, 106.15) with score 0.998
 - right_shoulder: (230.72, 131.27) with score 0.999
 - left_elbow: (319.58, 164.42) with score 0.999
 - right_elbow: (232.16, 226.10) with score 1.000
 - left_wrist: (390.95, 220.65) with score 0.999
 - right_wrist: (157.61, 227.93) with score 0.999
 - left_hip: (363.29, 249.14) with score 1.000
 - right_hip: (337.65, 250.50) with score 1.000
 - left_knee: (297.35, 368.55) with score 1.000
 - right_knee: (328.29, 390.84) with score 1.000
 - left_ankle: (433.81, 343.83) with score 0.999
 - right_ankle: (452.74, 504.60) with score 0.995
Found person at [-4.11, 53.42, 174.91, 372.64] with score 0.644
 - nose: (74.67, 84.38) with score 0.375
 - left_shoulder: (114.29, 113.60) with score 0.991
 - right_shoulder: (44.21, 117.73) with score 0.989
 - left_elbow: (124.69, 159.42) with score 0.978
 - right_elbow: (26.54, 154.78) with score 0.995
 - left_wrist: (132.86, 168.78) with score 0.957
 - right_wrist: (6.44, 195.67) with score 0.986
 - left_hip: (98.90, 199.49) with score 0.978
 - right_hip: (62.77, 200.49) with score 0.976
 - left_knee: (111.91, 277.06) with score 0.998
 - right_knee: (65.08, 276.40) with score 0.999
 - left_ankle: (128.95, 344.65) with score 0.973
 - right_ankle: (63.55, 345.60) with score 0.992
Found person at [511.40, 32.53, 658.71, 345.63] with score 0.384
 - nose: (554.88, 74.25) with score 0.796
 - left_eye: (563.64, 68.39) with score 0.716
 - right_eye: (547.38, 68.22) with score 0.542
 - left_ear: (575.42, 72.40) with score 0.324
 - left_shoulder: (576.47, 105.27) with score 0.999
 - right_shoulder: (531.19, 105.55) with score 0.956
 - left_elbow: (623.35, 151.54) with score 0.999
 - right_elbow: (549.79, 144.36) with score 0.387
 - left_wrist: (631.33, 198.37) with score 0.991
 - right_wrist: (547.36, 162.58) with score 0.486
 - left_hip: (578.36, 192.67) with score 0.989
 - right_hip: (555.21, 188.00) with score 0.925
 - left_knee: (604.56, 239.95) with score 0.977
 - right_knee: (545.23, 221.37) with score 0.952
 - left_ankle: (587.82, 323.26) with score 0.401
 - right_ankle: (546.77, 322.69) with score 0.846
```

</details>
{"license": "apache-2.0", "library_name": "transformers.js", "tags": ["pose-estimation"]}
Xenova/RTMO-l
null
[ "transformers.js", "onnx", "rtmo", "pose-estimation", "license:apache-2.0", "region:us" ]
null
2024-04-26T11:12:49+00:00
text-classification
transformers
{"license": "mit"}
MilosKosRad/TextualEntailment_DeBERTa_preprocessedSciFACT
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:13:05+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
Anas989898/llama-3-8b-it-codeact-v0.1
null
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:13:26+00:00
text-to-image
null
## Model ![pipeline](00018-3129638533.png)
{"tags": ["stable-diffusion", "text-to-image", "StableDiffusionPipeline", "lora"]}
fearvel/lloyd-de-saloum-pony-v1
null
[ "stable-diffusion", "text-to-image", "StableDiffusionPipeline", "lora", "region:us" ]
null
2024-04-26T11:15:18+00:00
text-generation
transformers
{}
jfaltermeier/llama3-theia-workshop-johannes
null
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:15:51+00:00
text-generation
transformers
{}
sgraband/llama3-theia-workshop-simon
null
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:15:53+00:00
null
null
{}
ivykopal/cssquad_prompt
null
[ "region:us" ]
null
2024-04-26T11:16:59+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
ustunek/gpt-2-doctor-eng
null
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:17:12+00:00
null
null
{}
ivykopal/cssquad_adapter
null
[ "region:us" ]
null
2024-04-26T11:17:30+00:00
null
null
{}
GraydientPlatformAPI/loras-april26
null
[ "region:us" ]
null
2024-04-26T11:18:18+00:00
null
null
{"license": "openrail"}
48xrf/robert
null
[ "license:openrail", "region:us" ]
null
2024-04-26T11:18:57+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# speaker-segmentation-fine-tuned-voxconverse-en

This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/voxconverse dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1250
- Der: 0.8257
- False Alarm: 0.3733
- Missed Detection: 0.3995
- Confusion: 0.0528

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Der    | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.9302        | 1.0   | 791  | 0.9903          | 0.6790 | 0.5013      | 0.0965           | 0.0812    |
| 0.8848        | 2.0   | 1582 | 1.0536          | 0.7965 | 0.3991      | 0.3409           | 0.0565    |
| 0.8513        | 3.0   | 2373 | 1.0884          | 0.8114 | 0.4017      | 0.3528           | 0.0569    |
| 0.7926        | 4.0   | 3164 | 1.1292          | 0.8378 | 0.3660      | 0.4219           | 0.0500    |
| 0.8147        | 5.0   | 3955 | 1.1250          | 0.8257 | 0.3733      | 0.3995           | 0.0528    |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.19.1
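For inference, the fine-tuned segmentation model can be swapped into a pyannote diarization pipeline. A minimal sketch, assuming the [`diarizers`](https://github.com/huggingface/diarizers) and `pyannote.audio` libraries are installed; the base pipeline choice and audio path are illustrative:

```python
import torch
from diarizers import SegmentationModel
from pyannote.audio import Pipeline

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a stock pyannote diarization pipeline
# (may require accepting pyannote's gated-model terms / an HF token)
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")

# Replace its segmentation component with this fine-tuned checkpoint
model = SegmentationModel().from_pretrained(
    "tgrhn/speaker-segmentation-fine-tuned-voxconverse-en"
)
pipeline._segmentation.model = model.to_pyannote_model().to(device)

# Diarize a local recording (path is illustrative)
diarization = pipeline("audio.wav")
print(diarization)
```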
{"license": "mit", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/voxconverse"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-voxconverse-en", "results": []}]}
tgrhn/speaker-segmentation-fine-tuned-voxconverse-en
null
[ "transformers", "tensorboard", "safetensors", "pyannet", "speaker-diarization", "speaker-segmentation", "generated_from_trainer", "dataset:diarizers-community/voxconverse", "base_model:pyannote/segmentation-3.0", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:20:59+00:00
null
null
{}
ivykopal/english_prompt_squad_prompt_100
null
[ "region:us" ]
null
2024-04-26T11:21:03+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-finetuned-ner

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1340
- Precision: 0.9582
- Recall: 0.9500
- F1: 0.9541
- Accuracy: 0.9499

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1595        | 0.5   | 7000  | 0.1539          | 0.9469    | 0.9377 | 0.9423 | 0.9375   |
| 0.1497        | 0.99  | 14000 | 0.1383          | 0.9549    | 0.9418 | 0.9483 | 0.9437   |
| 0.1185        | 1.49  | 21000 | 0.1314          | 0.9557    | 0.9464 | 0.9510 | 0.9467   |
| 0.1153        | 1.99  | 28000 | 0.1306          | 0.9553    | 0.9503 | 0.9528 | 0.9487   |
| 0.0977        | 2.49  | 35000 | 0.1340          | 0.9582    | 0.9500 | 0.9541 | 0.9499   |
| 0.0948        | 2.98  | 42000 | 0.1325          | 0.9584    | 0.9512 | 0.9548 | 0.9506   |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
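For inference, a minimal sketch using the 🤗 Transformers `pipeline` API; the example sentence is illustrative, and the entity label set depends on the (undocumented) training dataset:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for token classification (NER)
ner = pipeline(
    "token-classification",
    model="Sevixdd/bert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)

print(ner("Hugging Face was founded in New York City."))
```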
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-uncased", "model-index": [{"name": "bert-base-uncased-finetuned-ner", "results": []}]}
Sevixdd/bert-base-uncased-finetuned-ner
null
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:21:12+00:00
null
null
{}
ivykopal/sksquad_adapter
null
[ "region:us" ]
null
2024-04-26T11:21:49+00:00
text-generation
transformers
# ai-playground

The repo currently consists of:

- [forum-gpt/data-creation](/forum-gpt/data-creation/): a package for data creation and manipulation
- [forum-gpt/evaluation-app](/forum-gpt/evaluation-app/): a simple evaluation app
- [forum-gpt/training](forum-gpt/training/): saved axolotl training configurations

## Setup

Use Node `>= 20` with npm `>= 10`.

```bash
npm ci
```

## Quick start: Evaluation App

Set `OPEN_API_KEY` in your environment variables. You can set an arbitrary value like `foobar` in case you don't intend to use Open AI's GPT models, e.g. `export OPEN_API_KEY=foobar`.

Configure the models to chat with in [`bots.config.json`](/forum-gpt/evaluation-app/backend/bots.config.json).

```bash
npm run build
npm run start
```

Open the app at [localhost:5173](http://localhost:5173/).

### Deploy a model in Runpod

The Evaluation App works against Open AI's API. We recommend [`vllm`](https://github.com/vllm-project/vllm) for deploying your own models. A simple configuration may look like this:

- Docker Image Name: `vllm/vllm-openai:latest`
- Container Start Command: `--model mistralai/Mistral-7B-Instruct-v0.1`
  - The model name can be derived from [HuggingFace](https://huggingface.co/)
  - In case you are using a private model, add an environment variable named `HUGGING_FACE_HUB_TOKEN` to your pod with your token
- Expose HTTP Ports: `8000`
- Disk sizes: whatever is appropriate, e.g. 2x `50` GB
- Volume Mount Path: `/root/.cache/huggingface`
  - Recommended mount when using vllm images to avoid downloading the model whenever the pod is restarted

Use [this Runpod link](https://www.runpod.io/console/deploy?template=n338mcq81p) to start with a configuration for the Mistral-7B-Instruct-v0.2 model. You can use "Edit Pod Template" to adjust the template before using it.

Once the pod is started for the first time, it will get a random id assigned by Runpod, e.g. `g9q3ycbfk2yorr`. Configure the pod in [`bots.config.json`](/forum-gpt/evaluation-app/backend/bots.config.json), as sketched after this list:

- `id` must be unique between pods
- `type: runpod`
- `modelId` must be the same as used in the Container Start Command above
- `runpodId` is the id assigned by Runpod

In case of `Mistral` based models, disable the system prompt with `systemPrompt: null` as these models don't support it.
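A hypothetical `bots.config.json` entry combining the fields above — the `id` and `runpodId` values are illustrative, and the surrounding file layout follows whatever structure the repo's existing config already uses:

```json
{
  "id": "mistral-7b-instruct-runpod",
  "type": "runpod",
  "modelId": "mistralai/Mistral-7B-Instruct-v0.2",
  "runpodId": "g9q3ycbfk2yorr",
  "systemPrompt": null
}
```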
{}
jfaltermeier/llama3-theia-workshop-johannes-with-config
null
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:21:54+00:00
text-generation
transformers
*There currently is an issue with the **model generating random reserved special tokens (like "<|reserved_special_token_49|>") at the end**. Please use `skip_special_tokens=True`. We will update the model once we have found the reason for this behaviour. If you find a solution, please let us know!*

# Llama 3 DiscoLM German 8b v0.1 Experimental

<p align="center"><img src="/DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental/resolve/main/disco_llama.webp" width="400"></p>

# Introduction

**Llama 3 DiscoLM German 8b v0.1 Experimental** is an experimental Llama 3 based version of [DiscoLM German](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1).

This is an experimental release and not intended for production use. The model is still in development and will be updated with new features and improvements in the future.

An online demo is available [here](https://364b61f772fa7baacb.gradio.live/) (we may take this offline for updates).

# Prompt Format

DiscoLM German uses ChatML as the prompt format, which enables OpenAI endpoint compatibility and is supported by most inference libraries and frontends.

System prompts allow steerability and interesting new ways to interact with an LLM, guiding the rules, roles, and stylistic choices of the model.

```
<|im_start|>system
Du bist ein hilfreicher Assistent.<|im_end|>
<|im_start|>user
Wer bist du?<|im_end|>
<|im_start|>assistant
Ich bin ein Sprachmodell namens DiscoLM German und ich wurde von DiscoResearch trainiert.<|im_end|>
```

This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method:

```python
messages = [
    {"role": "system", "content": "Du bist ein hilfreicher Assistent."},
    {"role": "user", "content": "Wer bist du?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```

When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response.

# Example Code for Inference

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "Du bist ein hilfreicher Assistent."},
    {"role": "user", "content": "Wer bist du?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

# Limitations & Biases

This model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information. This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs, and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution.

# License

This model is distributed under the META LLAMA 3 COMMUNITY LICENSE; see [LICENSE](LICENSE) for more information.

# Acknowledgements

Built with Meta Llama 3. DiscoLM German is a [DiscoResearch](https://huggingface.co/DiscoResearch) project, a collective effort by [JP Harries](https://huggingface.co/jphme), [Björn Plüster](https://huggingface.co/bjoernp) and [Daniel Auras](https://huggingface.co/rasdani).

Development of Llama 3 DiscoLM German 8b was sponsored by [ellamind](https://ellamind.com). Compute was sponsored generously by [sysGen GmbH](https://www.sysgen.de/).

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

# About DiscoResearch

DiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our [Discord](https://discord.gg/ttNdas89f3), share your opinions and ideas, and advance open LLM research with us!

# Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place.
{"library_name": "transformers", "tags": []}
mayflowergmbh/Llama3_DiscoLM_German_8b_v0.1_experimental-GPTQ
null
[ "transformers", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T11:22:05+00:00
null
null
{}
zhenchuan/text
null
[ "region:us" ]
null
2024-04-26T11:24:03+00:00
null
null
{}
ivykopal/squad_prompt_100
null
[ "region:us" ]
null
2024-04-26T11:24:15+00:00
null
null
{}
bakkensus/llama-3-silver-gguf
null
[ "gguf", "region:us" ]
null
2024-04-26T11:24:32+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
nextab/Athena-v1.0-sft
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:24:46+00:00
text-generation
transformers
{}
mayflowergmbh/Llama-3-SauerkrautLM-8b-Instruct-GPTQ
null
[ "transformers", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T11:24:57+00:00
null
null
The ToolsBaer OLM to EML Conversion software makes importing Mac Outlook OLM files into the EML file format simple and quick. It can handle OLM files of any size or type effortlessly. One of its greatest benefits is its ability to import OLM files into EML files without any problems or additional software installation. The tool's simple design makes it easy to use even for non-technical users. All emails and attachments from OLM files are converted to EML files with 100% accuracy, and the transfer completes without changing any files. The application is guaranteed to preserve every OLM email attribute, including To, Cc, Bcc, From, Sent, date, and time. The trial version lets users convert the first 10 emails from each folder. The application works with all versions of Windows 11, 10, 8.1, 8, 7, XP, and Vista, and it is free for everyone to install and use. Read more: http://www.toolsbaer.com/olm-to-eml-conversion/
{}
madelineoliver/ToolsBaer-OLM-to-EML-Conversion
null
[ "region:us" ]
null
2024-04-26T11:25:45+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "lmsys/vicuna-7b-v1.5"}
tt1225/aic24-track2-multiview-videollava-7b-lora
null
[ "peft", "safetensors", "llava_llama", "arxiv:1910.09700", "base_model:lmsys/vicuna-7b-v1.5", "4-bit", "region:us" ]
null
2024-04-26T11:26:38+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
jd0g/Mistral-7B-NLI-v0.3
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:27:23+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tutuhu/style6
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:27:24+00:00
text-generation
transformers
{}
abuzila/llama3-theia-workshop-alex
null
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:29:39+00:00
text-to-image
diffusers
{}
nncyberpunk/SDXL1.0_PonyDiffusionXL_6
null
[ "diffusers", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-04-26T11:30:38+00:00
null
null
{}
rmarcosg/bark-detection-whisper
null
[ "region:us" ]
null
2024-04-26T11:30:40+00:00
text-generation
transformers
Approach: The TextSimpleCategoryLLM model is a GPT-2 based language model trained to generate text responses based on input prompts, focusing on a simple categorization task. The model uses the GPT-2 architecture, fine-tuned on a dataset of text prompts paired with corresponding categories. During training, the model learns to generate text that aligns with the specified category, enabling it to provide relevant information within the given context. This approach facilitates tasks such as text completion and question answering within defined categories, offering users a straightforward and effective tool for generating context-aware text responses.

Training data statistics (per column):

|        | label | category |
|:-------|------:|---------:|
| count  | 32998 |    32998 |
| unique | 31872 |        3 |
| freq   |    20 |     1299 |
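A minimal generation sketch with 🤗 Transformers; the exact prompt/category format the model expects is not documented here, so the input below is illustrative:

```python
from transformers import pipeline

# GPT-2 based model that generates category-aligned text responses
generator = pipeline(
    "text-generation",
    model="AkilanSelvam/spinsnow-problem-categorizer",
)

# Illustrative prompt; adapt it to the prompt format used during training
print(generator("My order arrived damaged and I want a refund.", max_new_tokens=20))
```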
{"language": ["en"], "license": "apache-2.0", "datasets": ["AkilanSelvam/text-simple-categorization"]}
AkilanSelvam/spinsnow-problem-categorizer
null
[ "transformers", "safetensors", "gpt2", "text-generation", "en", "dataset:AkilanSelvam/text-simple-categorization", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:31:32+00:00
null
null
{}
krushnakant27/bert-finetuned-squad
null
[ "region:us" ]
null
2024-04-26T11:32:09+00:00
text-generation
transformers
Quantizations of https://huggingface.co/jeiku/Foundation_3B

# From original readme

This is a big step forward for 3B class models. Trained on smol PIPPA, alpaca-cleaned, and two custom datasets, and based on https://huggingface.co/jeiku/Rosa_v3_3B

This should serve as a decent fiction model; it also excels at roleplay, but it is not an ideal model for logical queries or riddles.
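A minimal sketch for running one of the GGUF quantizations with `llama-cpp-python`; the quant filename below is hypothetical, so substitute an actual file from this repo:

```python
from llama_cpp import Llama

# Load a downloaded quant; the filename here is hypothetical
llm = Llama(model_path="Foundation_3B.Q4_K_M.gguf", n_ctx=2048)

# Short fiction-style completion, matching the model's intended use
output = llm("Write the opening line of a short story:", max_tokens=64)
print(output["choices"][0]["text"])
```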
{"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "Foundation_3B"], "pipeline_tag": "text-generation", "inference": false}
duyntnet/Foundation_3B-imatrix-GGUF
null
[ "transformers", "gguf", "imatrix", "Foundation_3B", "text-generation", "en", "license:other", "region:us" ]
null
2024-04-26T11:33:19+00:00
text-generation
transformers
{}
tseidel/qwen-theia-workshop
null
[ "transformers", "pytorch", "qwen2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:33:31+00:00
null
null
{}
CHANDNI666/wav2vec2-large-xlsr-53-odia-colab
null
[ "region:us" ]
null
2024-04-26T11:33:40+00:00
text-generation
transformers
{}
ifurnadjiev/qwen-theia-workshop
null
[ "transformers", "pytorch", "qwen2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:33:41+00:00
null
null
{}
kevlar4/finetuned_bart
null
[ "safetensors", "region:us" ]
null
2024-04-26T11:34:15+00:00
null
null
{}
CHRUNIKA/enhypen_ni-ki_by_seulisk
null
[ "region:us" ]
null
2024-04-26T11:34:23+00:00
null
null
{}
arpansharma/face-swap
null
[ "region:us" ]
null
2024-04-26T11:36:53+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speaker-segmentation-fine-tuned-callhome-eng This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/callhome eng dataset. It achieves the following results on the evaluation set: - Loss: 0.4570 - Der: 0.1803 - False Alarm: 0.0556 - Missed Detection: 0.0731 - Confusion: 0.0516 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion | |:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:| | 0.4257 | 1.0 | 362 | 0.4789 | 0.1918 | 0.0573 | 0.0786 | 0.0559 | | 0.3889 | 2.0 | 724 | 0.4660 | 0.1866 | 0.0556 | 0.0760 | 0.0549 | | 0.3758 | 3.0 | 1086 | 0.4587 | 0.1807 | 0.0548 | 0.0755 | 0.0503 | | 0.3643 | 4.0 | 1448 | 0.4564 | 0.1805 | 0.0555 | 0.0734 | 0.0515 | | 0.3511 | 5.0 | 1810 | 0.4570 | 0.1803 | 0.0556 | 0.0731 | 0.0516 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.19.1
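Since the card leaves intended uses open, here is a minimal sketch following the pattern documented by the diarizers library (which produces these fine-tuned checkpoints); the stock pyannote/speaker-diarization-3.1 pipeline and device handling are assumptions, and that pipeline is gated behind a Hugging Face token.

```python
import torch
from diarizers import SegmentationModel
from pyannote.audio import Pipeline

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a stock pretrained diarization pipeline (gated; requires HF authentication).
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")
pipeline.to(device)

# Swap in this fine-tuned segmentation checkpoint.
model = SegmentationModel().from_pretrained("tgrhn/speaker-segmentation-fine-tuned-callhome-eng")
pipeline._segmentation.model = model.to_pyannote_model().to(device)

print(pipeline("audio.wav"))
```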
{"license": "mit", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/callhome"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-callhome-eng", "results": []}]}
tgrhn/speaker-segmentation-fine-tuned-callhome-eng
null
[ "transformers", "tensorboard", "safetensors", "pyannet", "speaker-diarization", "speaker-segmentation", "generated_from_trainer", "dataset:diarizers-community/callhome", "base_model:pyannote/segmentation-3.0", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:37:07+00:00
null
null
{}
nadejdatarabukina/distraining2
null
[ "region:us" ]
null
2024-04-26T11:37:26+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
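Because the card itself is empty, here is a minimal loading sketch derived only from the adapter metadata (a PEFT adapter on mistralai/Mistral-7B-Instruct-v0.2); the prompt format and generation settings are assumptions.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter from this repository.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "antonyo94/Enlighten_Instruct")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Mistral-Instruct prompt format (an assumption for this adapter).
inputs = tokenizer("[INST] Hello, who are you? [/INST]", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```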
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
antonyo94/Enlighten_Instruct
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-26T11:38:22+00:00
null
null
{}
Stuka88/Prueba
null
[ "region:us" ]
null
2024-04-26T11:38:29+00:00
text-generation
transformers
![SauerkrautLM](https://vago-solutions.ai/wp-content/uploads/2024/04/Llama3-Pic.png "Llama-3-SauerkrautLM-8b-Instruct") ## VAGO solutions Llama-3-SauerkrautLM-8b-Instruct Introducing **Llama-3-SauerkrautLM-8b-Instruct** – our Sauerkraut version of the powerful [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)! The model **Llama-3-SauerkrautLM-8b-Instruct** is a **joint effort** between **VAGO Solutions** and **Hyperspace.ai.** - Aligned with **DPO** # Table of Contents 1. [Overview of all Llama-3-SauerkrautLM-8b-Instruct](#all-Llama-3-SauerkrautLM-8b-Instruct) 2. [Model Details](#model-details) - [Prompt template](#prompt-template) - [Training procedure](#proceed-of-the-training) 3. [Evaluation](#evaluation) 4. [Disclaimer](#disclaimer) 5. [Contact](#contact) 6. [Collaborations](#collaborations) 7. [Acknowledgement](#acknowledgement) ## All SauerkrautLM-llama-3-8B-Instruct | Model | HF | EXL2 | GGUF | AWQ | |-------|-------|-------|-------|-------| | Llama-3-SauerkrautLM-8b-Instruct | [Link](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct) | [Link](https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-exl2) | [Link](https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-GGUF) | coming soon | ## Model Details **SauerkrautLM-llama-3-8B-Instruct** - **Model Type:** Llama-3-SauerkrautLM-8b-Instruct is a fine-tuned model based on [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) - **Language(s):** German, English - **License:** [meta-llama](https://llama.meta.com/llama3/license) - **Contact:** [VAGO solutions](https://vago-solutions.ai), [Hyperspace.ai](https://hyperspace.computer/) ### Training procedure: - We trained this model with a two-stage DPO fine-tuning: one epoch with 70k samples, followed by another epoch with 20k samples. - LaserRMT version coming soon **We improved the model's capabilities noticeably by feeding it curated German data.** ### Prompt Template: **English:** ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a helpful AI assistant.<|eot_id|><|start_header_id|>user<|end_header_id|> Input<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` **German:** ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> Du bist ein freundlicher und hilfreicher deutscher KI-Assistent.<|eot_id|><|start_header_id|>user<|end_header_id|> Input<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ### Example output in German: ``` Prompt: Erkläre mir die Vorteile für das Fine-Tuning von Meta's LLama Modellen. Antwort: Sehr gern! Fine-tuning von Meta's LLaMA-Modellen ist eine sehr effektive Möglichkeit, um Ihre Anwendung oder Ihr Projekt mit einer hochentwickelten Sprachverarbeitung auszustatten. Die Vorteile des Fine-Tunings von LLaMA-Modellen sind: Adaptation an spezielle Aufgaben: Durch das Fine-Tuning können Sie die Modelle auf Ihre speziellen Bedürfnisse und Anforderungen anpassen. Dies ermöglicht es Ihnen, die Leistung der Modelle in Ihren Anwendungen zu verbessern. Verbesserte Genauigkeit: Die fine-tuned Modelle lernen, sich an die spezifischen Merkmale Ihres Datenbestands anzupassen, was zu einer besseren Genauigkeit bei der Textklassifizierung, -generierung, -summarisierung usw. führt.
Anpassung an Branche oder Domain: Wenn Sie beispielsweise ein Medizinprojekt haben, können Sie die Modelle auf medizinische Fachterminologie und -konzepte trainieren, um eine höhere Präzision bei der Analyse medizinischer Texte zu erzielen. Kosten-Effektivität: Im Vergleich zum Training neuer Modelle von Grund auf kann das Fine-Tuning viel kosteneffektiver sein, da Sie nur einen kleinen Teil des Trainingsdatums benötigen, um die Modelle an Ihre spezifischen Anforderungen anzupassen. Schnellere Entwicklung: Das Fine-Tuning ermöglicht es Entwicklern, schnell neue Funktionen und Features in ihre Anwendungen einzufügen, ohne dass sie lange warten müssen, bis neue Modelle trainiert werden. Bessere Interpretierbarkeit: Da die fine-tuned Modelle auf Ihre spezifischen Daten bestimmt wurden, können Sie leichter verstehen, warum bestimmte Entscheidungen getroffen werden, was wichtig ist, wenn es um Transparenz und Verantwortlichkeit geht. Insgesamt bietet das Fine-Tuning von LLaMA-Modellen eine flexible und effektive Möglichkeit, um Ihre Anwendungen und Projekte durch die Integration von fortschrittlichen Sprachmodellen zu verbessern. ``` ## Evaluation **Open LLM Leaderboard:** evaluated with lm-evaluation-benchmark-harness 0.4.2 | Metric | Value | |-----------------------|---------------------------| | Avg. | **74.57** | | ARC (25-shot) | 74.66 | | HellaSwag (10-shot) | 89.60 | | MMLU (5-shot) | 66.55 | | TruthfulQA (0-shot) | 66.32 | | Winogrande (5-shot) | 80.98 | | GSM8K (5-shot) | 69.29 | **MT-Bench English** ``` ########## First turn ########## score model turn Llama-3-SauerkrautLM-8b-Instruct 1 8.15625 ########## Second turn ########## score model turn Llama-3-SauerkrautLM-8b-Instruct 2 7.65 ########## Average ########## score model Llama-3-SauerkrautLM-8b-Instruct 7.903125 * ``` * due to specific instruction training the english MT-Bench score is slightly lower than the original LLama-3-8B-Instruct **MT-Bench German** ``` ########## First turn ########## score model turn Llama-3-SauerkrautLM-8b-Instruct 1 7.675 ########## Second turn ########## score model turn Llama-3-SauerkrautLM-8b-Instruct 2 7.6375 ########## Average ########## score model Llama-3-SauerkrautLM-8b-Instruct 7.65625 ``` **German RAG LLM Evaluation** ``` | Task |Version|Metric|Value| |Stderr| |------------------------------------------------------|------:|------|----:|---|-----:| |all | |acc |0.905|± |0.0086| |community:german_rag_eval:_average:0 | |acc |0.905|± |0.0086| |community:german_rag_eval:choose_context_by_question:0| 0|acc |0.896|± |0.0097| |community:german_rag_eval:choose_question_by_context:0| 0|acc |0.826|± |0.0120| |community:german_rag_eval:context_question_match:0 | 0|acc |0.987|± |0.0036| |community:german_rag_eval:question_answer_match:0 | 0|acc |0.911|± |0.0090| ``` ## Disclaimer We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out. However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. ## Contact If you are interested in customized LLMs for business applications, please get in contact with us via our websites. 
We are also grateful for your feedback and suggestions. ## Collaborations We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt), [Hyperspace.computer](https://hyperspace.computer/) ## Acknowledgement Many thanks to [Meta](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for providing such a valuable model to the open-source community. Also many thanks to [bartowski](https://huggingface.co/bartowski) for the super-fast quantization of our model in GGUF and EXL2 formats.
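A minimal inference sketch using the prompt template above via the tokenizer's chat template; it is shown for the original VAGOsolutions checkpoint (not this HQQ repack), and the sampling settings are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# German system prompt taken from the template section above.
messages = [
    {"role": "system", "content": "Du bist ein freundlicher und hilfreicher deutscher KI-Assistent."},
    {"role": "user", "content": "Erkläre mir kurz das Fine-Tuning von Sprachmodellen."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.6)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```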
{"language": ["de", "en"], "license": "other", "tags": ["two stage dpo", "dpo", "hqq"], "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. 
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. 
Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
mayflowergmbh/Llama-3-SauerkrautLM-8b-Instruct-HQQ
null
[ "transformers", "llama", "text-generation", "two stage dpo", "dpo", "hqq", "conversational", "de", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:40:05+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speaker-segmentation-fine-tuned-callhome-eng-2 This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/callhome eng dataset. It achieves the following results on the evaluation set: - Loss: 0.4666 - Der: 0.1814 - False Alarm: 0.0552 - Missed Detection: 0.0739 - Confusion: 0.0523 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion | |:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:| | 0.4548 | 1.0 | 181 | 0.4943 | 0.1966 | 0.0564 | 0.0811 | 0.0590 | | 0.4171 | 2.0 | 362 | 0.4845 | 0.1951 | 0.0644 | 0.0754 | 0.0552 | | 0.396 | 3.0 | 543 | 0.4633 | 0.1856 | 0.0502 | 0.0825 | 0.0529 | | 0.3856 | 4.0 | 724 | 0.4609 | 0.1843 | 0.0571 | 0.0739 | 0.0534 | | 0.3693 | 5.0 | 905 | 0.4639 | 0.1821 | 0.0531 | 0.0761 | 0.0528 | | 0.3634 | 6.0 | 1086 | 0.4610 | 0.1821 | 0.0588 | 0.0716 | 0.0517 | | 0.3655 | 7.0 | 1267 | 0.4638 | 0.1827 | 0.0566 | 0.0740 | 0.0521 | | 0.3608 | 8.0 | 1448 | 0.4603 | 0.1814 | 0.0567 | 0.0732 | 0.0515 | | 0.3545 | 9.0 | 1629 | 0.4645 | 0.1805 | 0.0530 | 0.0761 | 0.0514 | | 0.3508 | 10.0 | 1810 | 0.4666 | 0.1814 | 0.0552 | 0.0739 | 0.0523 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/callhome"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-callhome-eng-2", "results": []}]}
tgrhn/speaker-segmentation-fine-tuned-callhome-eng-2
null
[ "transformers", "tensorboard", "safetensors", "pyannet", "speaker-diarization", "speaker-segmentation", "generated_from_trainer", "dataset:diarizers-community/callhome", "base_model:pyannote/segmentation-3.0", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:40:23+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama2-mu-23M-1 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3027 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0024 - train_batch_size: 192 - eval_batch_size: 192 - seed: 42 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.973 | 0.7429 | 1000 | 0.3027 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
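As the card gives no usage section, here is a minimal loading sketch; "mu_llama" is not a stock transformers architecture, so passing trust_remote_code=True is an assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# mu_llama is a custom architecture; remote code is assumed to be required.
model_id = "HachiML/Llama2-mu-23M-1"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Once upon a time", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```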
{"tags": ["generated_from_trainer"], "model-index": [{"name": "Llama2-mu-23M-1", "results": []}]}
HachiML/Llama2-mu-23M-1
null
[ "transformers", "safetensors", "mu_llama", "text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:40:37+00:00
null
null
{"license": "openrail"}
yaldaghazali/yalda
null
[ "license:openrail", "region:us" ]
null
2024-04-26T11:42:12+00:00
null
null
{}
jeliasherrero/dqn-SpaceInvadersNoFrameskip-v4
null
[ "region:us" ]
null
2024-04-26T11:43:39+00:00
null
null
{"license": "mit"}
kike/fito_v2
null
[ "license:mit", "region:us" ]
null
2024-04-26T11:44:02+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - fatimaaa1/model1 <Gallery /> ## Model description These are fatimaaa1/model1 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: fatimaaa1/model1/vae. ## Trigger words You should use "a bussiness card" (the instance prompt exactly as trained, spelling included) to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](fatimaaa1/model1/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use A minimal sketch, assuming the standard diffusers LoRA-loading API:

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then attach these LoRA weights.
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("fatimaaa1/model1")

# Trigger phrase exactly as trained.
image = pipeline("a bussiness card").images[0]
```

#### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a bussiness card", "widget": []}
fatimaaa1/model1
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-26T11:44:10+00:00
null
null
{}
bravemindai/codellama-7b-transitional-services-beta-gguf
null
[ "gguf", "region:us" ]
null
2024-04-26T11:44:35+00:00
null
null
# Volt Performance Reviews Germany Höhle der Löwen Official Website, Buy Volt Performance (Germany) capsules are dietary supplements for boosting male vitality and sexual performance. They are made from a blend of natural ingredients known for their aphrodisiac and energy-enhancing properties. The main ingredients typically include herbs such as Tongkat Ali, maca root, and ginseng, which are known for their ability to improve libido, increase energy levels, and support general well-being. ## **[Click here to buy now from the official Volt Performance website](https://capsules24x7.com/volt-performance)** ## How does the Volt Male Performance Capsules supplement work? The Volt Male Performance Capsules supplement offers a comprehensive way to address various sexual problems in men. The V-capsules are rich in amino acids and plant concentrates meant to revive and expand various processes in the body that promote sexual well-being. Some of the capsules' ingredients support blood circulation: L-arginine and L-citrulline promote the formation of nitric oxide, which relaxes the blood vessels and encourages blood flow. Men need adequate blood flow to the penis to maintain their erectile capacity. Low drive worries most men. Volt Male Performance Capsules can increase your sex drive through common botanicals such as ashwagandha and maca root extracts; the two traditional herbs have been clinically shown, the card claims, to raise testosterone levels and thereby restore healthy libido. Weakened resilience can keep you from performing in bed. L-glutathione and various other nutrients in the capsules support antioxidant defenses and improve sperm quality and the regeneration of penile cells. Poor sleep quality and stress can make you irritable, and an unsettled state of mind keeps you from achieving firm erections. The capsules contain nerve-calming nutrients that promote relaxation and recovery; a better state of mind lets you attain the erection, sexual desire, and stamina you expect in bed. Overall, the supplement uses a range of nutrients to improve your entire sexual health: taking the V-capsules regularly stimulates testosterone production, improves blood flow, strengthens penile health, and benefits your overall sexual health. ## Ingredients of the Volt Male Performance Capsules Volt Male Performance Capsules contain common ingredients intended to support male well-being. The various amino acids and plant extracts come in precise dosages and offer a range of health benefits. The key components include: L-arginine: according to the manufacturer of the Volt Male Performance Capsules supplement, the formulation uses plain L-arginine to support nitric-oxide synthesis. The nitrogen and oxygen molecules are essential for dilating the blood vessels and thus influence blood flow.
Various studies show that regular intake of L-arginine can help you achieve high-quality erections on demand. The semi-essential amino acid can treat the symptoms of mild to moderate erectile dysfunction without causing side effects for users. L-glutathione: Volt Male Performance Capsules provide 50 mg of L-glutathione per day to effectively improve the body's antioxidant levels. Free radicals can compromise well-being, sperm quality, and testosterone production. L-glutathione affects endothelial function and can alleviate ED problems in aging men. L-citrulline: L-citrulline promotes blood flow in men. The amino acid is converted into arginine, which stimulates nitric-oxide production. Ideal nitric-oxide levels affect a man's inner performance, mood, and calm. The manufacturer of Volt Male Performance Capsules cites a study in the Journal of Urology stating that L-citrulline can improve erectile hardness in men with mild ED. L-methionine: L-methionine is a potent nutrient that promotes a healthy metabolism and increases energy levels. The manufacturer of Volt Male Performance Capsules notes that the amino acid supports detoxification and can prevent estrogen overproduction in men. ## **[Click here to buy now from the official Volt Performance website](https://capsules24x7.com/volt-performance)**
{}
VKapseln475/VoltPerformance3
null
[ "region:us" ]
null
2024-04-26T11:44:43+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speaker-segmentation-fine-tuned-callhome-eng-3 This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/callhome eng dataset. It achieves the following results on the evaluation set: - Loss: 0.4652 - Der: 0.1821 - False Alarm: 0.0597 - Missed Detection: 0.0715 - Confusion: 0.0509 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion | |:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:| | 0.4563 | 1.0 | 181 | 0.4971 | 0.1973 | 0.0553 | 0.0802 | 0.0617 | | 0.4053 | 2.0 | 362 | 0.4740 | 0.1899 | 0.0604 | 0.0749 | 0.0546 | | 0.3833 | 3.0 | 543 | 0.4636 | 0.1854 | 0.0556 | 0.0766 | 0.0531 | | 0.3738 | 4.0 | 724 | 0.4664 | 0.1830 | 0.0579 | 0.0733 | 0.0518 | | 0.3596 | 5.0 | 905 | 0.4571 | 0.1800 | 0.0558 | 0.0748 | 0.0494 | | 0.3533 | 6.0 | 1086 | 0.4671 | 0.1844 | 0.0629 | 0.0685 | 0.0529 | | 0.3571 | 7.0 | 1267 | 0.4641 | 0.1820 | 0.0594 | 0.0711 | 0.0515 | | 0.3496 | 8.0 | 1448 | 0.4641 | 0.1824 | 0.0596 | 0.0717 | 0.0511 | | 0.3449 | 9.0 | 1629 | 0.4636 | 0.1819 | 0.0591 | 0.0718 | 0.0510 | | 0.3415 | 10.0 | 1810 | 0.4652 | 0.1821 | 0.0597 | 0.0715 | 0.0509 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/callhome"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-callhome-eng-3", "results": []}]}
tgrhn/speaker-segmentation-fine-tuned-callhome-eng-3
null
[ "transformers", "tensorboard", "safetensors", "pyannet", "speaker-diarization", "speaker-segmentation", "generated_from_trainer", "dataset:diarizers-community/callhome", "base_model:pyannote/segmentation-3.0", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:45:11+00:00
text-generation
transformers
*There is currently an issue with the **model generating random reserved special tokens (like "<|reserved_special_token_49|>") at the end**. Please use it with `skip_special_tokens=True`. We will update the model once we have found the reason for this behaviour. If you have found a solution, please let us know!* # Llama 3 DiscoLM German 8b v0.1 Experimental <p align="center"><img src="disco_llama.webp" width="400"></p> # Introduction **Llama 3 DiscoLM German 8b v0.1 Experimental** is an experimental Llama 3 based version of [DiscoLM German](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1). This is an experimental release and not intended for production use. The model is still in development and will be updated with new features and improvements in the future. Please find an online demo [here](https://364b61f772fa7baacb.gradio.live/) (we may take this offline for updates). # Prompt Format DiscoLM German uses ChatML as the prompt format, which enables OpenAI endpoint compatibility and is supported by most inference libraries and frontends. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. ``` <|im_start|>system Du bist ein hilfreicher Assistent.<|im_end|> <|im_start|>user Wer bist du?<|im_end|> <|im_start|>assistant Ich bin ein Sprachmodell namens DiscoLM German und ich wurde von DiscoResearch trainiert.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "Du bist ein hilfreicher Assistent."}, {"role": "user", "content": "Wer bist du?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. # Example Code for Inference ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "Du bist ein hilfreicher Assistent."}, {"role": "user", "content": "Wer bist du?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` # Limitations & Biases This model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information. This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution. # License This model is distributed under the META LLAMA 3 COMMUNITY LICENSE, see [LICENSE](LICENSE) for more information. 
# Acknowledgements Built with Meta Llama 3. DiscoLM German is a [DiscoResearch](https://huggingface.co/DiscoResearch) project, a collective effort by [JP Harries](https://huggingface.co/jphme), [Björn Plüster](https://huggingface.co/bjoernp) and [Daniel Auras](https://huggingface.co/rasdani). Development of Llama 3 DiscoLM German 8b was sponsored by [ellamind](https://ellamind.com). Compute was sponsored generously by [sysGen GmbH](https://www.sysgen.de/). [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) # About DiscoResearch DiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our [Discord](https://discord.gg/ttNdas89f3), share your opinions and ideas, and advance open LLM research with us! # Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place.
{"library_name": "transformers", "tags": ["hqq"]}
mayflowergmbh/Llama3_DiscoLM_German_8b_v0.1_experimental-HQQ
null
[ "transformers", "llama", "text-generation", "hqq", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:45:39+00:00
null
null
### rakib72642/HDML_Face_Detection_Model # HuggingFace: https://huggingface.co/rakib72642/HDML_Face_Detection_Model # Setup Global API sudo apt install iproute2 -y && sudo apt install wget -y && sudo apt install unzip -y && sudo apt install nvtop -y && sudo apt-get install git-all -y && sudo apt-get install git-lfs -y && sudo apt-get update && sudo apt-get install libgl1 -y && sudo apt install curl -y && curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | sudo tee /etc/apt/sources.list.d/ngrok.list && sudo apt update && sudo apt install ngrok -y && sudo apt update && sudo apt upgrade -y && ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS && ngrok http --domain=hawkeyes.ngrok.app 8585 # Setup Local API git clone https://huggingface.co/rakib72642/HDML_Face_Detection_Model && cd HDML_Face_Detection_Model && pip install -r requirements.txt && sudo apt update && sudo apt upgrade -y && python face_main.py cd HDML_Face_Detection_Model && python face_main.py # hypercorn face_main:app --bind 127.0.0.1:8585 --workers 4
{}
rakib72642/HDML_Face_Detection_Model
null
[ "region:us" ]
null
2024-04-26T11:45:51+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
{"library_name": "peft", "base_model": "openlm-research/open_llama_3b_v2"}
yiyic/llama3b-text-ent-lora-clf-epoch-1
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openlm-research/open_llama_3b_v2", "region:us" ]
null
2024-04-26T11:46:24+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
{"library_name": "peft", "base_model": "openlm-research/open_llama_3b_v2"}
yiyic/llama3b-text-prop-lora-clf-epoch-1
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openlm-research/open_llama_3b_v2", "region:us" ]
null
2024-04-26T11:46:51+00:00
text-generation
transformers
# Cecilia **4B**, SFT...

* [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)

**Chinese, English**

Test 0 of the series. Released as an early preview of our v3 LLMs. The v3 series covers the "Shi-Ci", "AnFeng" and "Cecilia" LLM products. The sizes are labelled from small to large: "Nano", "Leap", "Pattern", "Avocet", "Robin", "Kestrel".
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "library_name": "transformers", "pipeline_tag": "text-generation", "inference": true}
NLPark/Test0_Cecilia
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "en", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:47:31+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/yam-peleg/Hebrew-Mistral-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q6_K.gguf) | Q6_K | 6.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q8_0.gguf) | Q8_0 | 8.1 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.f16.gguf) | f16 | 15.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
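As an illustration of the Usage section above, here is a minimal sketch (not part of the original quantizer workflow) that downloads one of the single-file quants and runs it with the `llama-cpp-python` bindings; the choice of Q4_K_M, the context size, and the prompt are arbitrary examples.

```python
# Minimal sketch: fetch a single-file quant and run it locally.
# Assumes `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized file from this repository (Q4_K_M chosen arbitrarily).
model_path = hf_hub_download(
    repo_id="mradermacher/Hebrew-Mistral-7B-GGUF",
    filename="Hebrew-Mistral-7B.Q4_K_M.gguf",
)

# Load the model; n_ctx is an illustrative value, not a recommendation.
llm = Llama(model_path=model_path, n_ctx=2048)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```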
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "yam-peleg/Hebrew-Mistral-7B", "quantized_by": "mradermacher"}
mradermacher/Hebrew-Mistral-7B-GGUF
null
[ "transformers", "gguf", "en", "base_model:yam-peleg/Hebrew-Mistral-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:47:38+00:00
null
null
{}
ivykopal/english_adapter_100k
null
[ "region:us" ]
null
2024-04-26T11:49:05+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
RobertML/sn6c
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:49:16+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speaker-segmentation-fine-tuned-callhome-eng-4 This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/callhome eng dataset. It achieves the following results on the evaluation set: - Loss: 0.4660 - Der: 0.1806 - False Alarm: 0.0592 - Missed Detection: 0.0714 - Confusion: 0.0501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion | |:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:| | 0.4104 | 1.0 | 362 | 0.4742 | 0.1920 | 0.0615 | 0.0742 | 0.0562 | | 0.4041 | 2.0 | 724 | 0.4738 | 0.1868 | 0.0620 | 0.0714 | 0.0534 | | 0.3741 | 3.0 | 1086 | 0.4695 | 0.1851 | 0.0625 | 0.0705 | 0.0521 | | 0.3612 | 4.0 | 1448 | 0.4689 | 0.1814 | 0.0588 | 0.0707 | 0.0519 | | 0.3404 | 5.0 | 1810 | 0.4649 | 0.1792 | 0.0580 | 0.0720 | 0.0492 | | 0.3462 | 6.0 | 2172 | 0.4620 | 0.1812 | 0.0615 | 0.0692 | 0.0505 | | 0.3296 | 7.0 | 2534 | 0.4631 | 0.1800 | 0.0582 | 0.0713 | 0.0506 | | 0.3261 | 8.0 | 2896 | 0.4731 | 0.1820 | 0.0586 | 0.0733 | 0.0501 | | 0.3251 | 9.0 | 3258 | 0.4663 | 0.1811 | 0.0579 | 0.0727 | 0.0506 | | 0.3154 | 10.0 | 3620 | 0.4660 | 0.1806 | 0.0592 | 0.0714 | 0.0501 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.19.1
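For readers unsure how to use this checkpoint, a minimal sketch follows, assuming pyannote.audio >= 3.1; the embedding model and the clustering hyperparameters below are illustrative defaults, not values published with this fine-tune.

```python
# Hypothetical usage sketch: plug the fine-tuned segmentation model into a
# speaker diarization pipeline (assumes pyannote.audio >= 3.1).
from pyannote.audio import Model
from pyannote.audio.pipelines import SpeakerDiarization

# Load the fine-tuned segmentation checkpoint from this repository.
segmentation = Model.from_pretrained(
    "tgrhn/speaker-segmentation-fine-tuned-callhome-eng-4"
)

# The embedding model and hyperparameters are illustrative defaults.
pipeline = SpeakerDiarization(
    segmentation=segmentation,
    embedding="speechbrain/spkrec-ecapa-voxceleb",
)
pipeline.instantiate({
    "segmentation": {"min_duration_off": 0.0},
    "clustering": {"method": "centroid", "min_cluster_size": 12, "threshold": 0.7},
})

diarization = pipeline("audio.wav")  # path is a placeholder
print(diarization)
```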
{"license": "mit", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/callhome"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-callhome-eng-4", "results": []}]}
tgrhn/speaker-segmentation-fine-tuned-callhome-eng-4
null
[ "transformers", "tensorboard", "safetensors", "pyannet", "speaker-diarization", "speaker-segmentation", "generated_from_trainer", "dataset:diarizers-community/callhome", "base_model:pyannote/segmentation-3.0", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:49:43+00:00
null
null
{}
hacking-jack/example
null
[ "region:us" ]
null
2024-04-26T11:51:20+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon-rw-1b-code-generation-llm-task2 This model is a fine-tuned version of [petals-team/falcon-rw-1b](https://huggingface.co/petals-team/falcon-rw-1b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0581 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 320 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3318 | 0.1 | 20 | 1.3407 | | 1.2643 | 0.2 | 40 | 1.1844 | | 1.1681 | 0.3 | 60 | 1.1522 | | 1.0891 | 0.4 | 80 | 1.1209 | | 1.2164 | 0.5 | 100 | 1.1265 | | 1.0855 | 0.6 | 120 | 1.1010 | | 1.1129 | 0.7 | 140 | 1.0897 | | 1.1169 | 0.8 | 160 | 1.0799 | | 1.0664 | 0.9 | 180 | 1.0706 | | 1.1483 | 1.0 | 200 | 1.0756 | | 0.9707 | 1.1 | 220 | 1.0625 | | 1.0102 | 1.2 | 240 | 1.0624 | | 1.0805 | 1.3 | 260 | 1.0615 | | 0.969 | 1.4 | 280 | 1.0580 | | 1.118 | 1.5 | 300 | 1.0582 | | 0.9883 | 1.6 | 320 | 1.0581 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
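Since this repository contains only the PEFT adapter, a minimal loading sketch follows; it assumes recent `transformers` and `peft` versions (the card lists PEFT 0.10.0 and Transformers 4.40.0), and the code prompt is an arbitrary example.

```python
# Sketch (not from the card itself): load the LoRA adapter on top of its base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("petals-team/falcon-rw-1b")
tokenizer = AutoTokenizer.from_pretrained("petals-team/falcon-rw-1b")

# Attach the fine-tuned adapter weights from this repository.
model = PeftModel.from_pretrained(base, "Katochh/falcon-rw-1b-code-generation-llm-task2")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```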
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "petals-team/falcon-rw-1b", "model-index": [{"name": "falcon-rw-1b-code-generation-llm-task2", "results": []}]}
Katochh/falcon-rw-1b-code-generation-llm-task2
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:petals-team/falcon-rw-1b", "license:apache-2.0", "region:us" ]
null
2024-04-26T11:51:28+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
Daniel-007/phi-3_qlora_consumer
null
[ "transformers", "safetensors", "trl", "sft", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:51:39+00:00
text-generation
transformers
# Unichat-llama3-Chinese-8B-28K

## Introduction

* The China Unicom AI Innovation Center releases the long-context version of the industry's first Llama 3 Chinese instruction fine-tuned model, trained with full-parameter fine-tuning (not LoRA/LongLoRA); released at 19:00 on April 26, 2024
* This model is based on [**Meta Llama 3**](https://huggingface.co/collections/meta-llama/meta-llama-3-66214712577ca38149ebb2b6), with additional Chinese data added during training to achieve high-quality Chinese question answering with Llama 3
* The model's native context length is 28K
* Base model: [**Meta-Llama-3-8B**](https://huggingface.co/meta-llama/Meta-Llama-3-8B)

### 📊 Data

- High-quality long-text instruction data covering many domains and industries, providing ample data support for model training
- The fine-tuning instruction data went through strict manual review, ensuring that only high-quality instruction data was used for fine-tuning

## Basic Example

* An excerpt of about 16,000 characters, covering chapters [319] to [337] of the novel 《明朝那些事儿》 (*Those Things of the Ming Dynasty*), was given as context, and the model was asked questions about it.
* Q: What were the composition and duties of the three great camps of the Ming Dynasty?
* A: According to the text, the composition and duties of the Ming Dynasty's three great camps were as follows: 1. The Five Army Camp (五军营): composed of cavalry and infantry and divided into the Central Army, the Left Army, the Left Flank Army, the Right Flank Army, and the Right Sentry Army; it served as the main attacking force. 2. The Three Thousand Camp (三千营): composed of surrendered Mongol cavalry, essentially a mercenary force. Its troops were all cavalry; although not large in number, this was the most formidable cavalry force under Zhu Di's command, and in war it mainly played the role of a shock force. 3. The Divine Machine Camp (神机营): a force built mainly around firearms such as cannons and hand cannons; it was the Ming government's artillery corps. Zhu Di had profound reasons for organizing such a unit. Therefore, the composition and duties of the three great camps were: the Five Army Camp was responsible for attack, the Three Thousand Camp for shock assaults, and the Divine Machine Camp for strikes with firearms.

## Quick Start

```python
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "UnicomAI/Unichat-llama3-Chinese-8B-28K"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

messages = [
    {"role": "system", "content": "A chat between a curious user and an artificial intelligence assistant.The assistant gives helpful, detailed, and polite answers to the user's questions."},
    {"role": "user", "content": "你是谁"},  # "Who are you?"
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on either the model's EOS token or Llama 3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=1,
    repetition_penalty=1.15
)

# Print only the newly generated text, stripping the prompt prefix.
print(outputs[0]["generated_text"][len(prompt):])
```

## Resources

For more models, datasets, and training details, see:

* GitHub: [**Unichat-llama3-Chinese**](https://github.com/UnicomAI/Unichat-llama3-Chinese)
{"language": ["en", "zh"], "license": "apache-2.0", "pipeline_tag": "text-generation"}
UnicomLLM/Unichat-llama3-Chinese-8B-28K
null
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "en", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:51:52+00:00
null
null
{}
krasserm/gba-planner-7B-v0.1-GGUF
null
[ "gguf", "region:us" ]
null
2024-04-26T11:51:59+00:00
feature-extraction
transformers
{}
thonyyy/xlm-roberta-base-trimmed-en_id
null
[ "transformers", "safetensors", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:52:16+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/openbmb/Eurux-8x22b-nca <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q2_K.gguf.part2of2) | Q2_K | 52.2 | | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ3_XS.gguf.part2of2) | IQ3_XS | 58.3 | | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ3_S.gguf.part2of2) | IQ3_S | 61.6 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q3_K_S.gguf.part2of2) | Q3_K_S | 61.6 | | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ3_M.gguf.part2of2) | IQ3_M | 64.6 | | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q3_K_M.gguf.part2of2) | Q3_K_M | 67.9 | lower quality | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q3_K_L.gguf.part2of2) | Q3_K_L | 72.7 | | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ4_XS.gguf.part2of2) | IQ4_XS | 76.5 | | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q4_K_S.gguf.part2of2) | Q4_K_S | 80.6 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q4_K_M.gguf.part2of2) | Q4_K_M | 85.7 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q5_K_S.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q5_K_S.gguf.part2of2) | Q5_K_S | 97.1 | | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q5_K_M.gguf.part3of3) | Q5_K_M | 100.1 | | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q6_K.gguf.part3of3) | Q6_K | 115.6 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q8_0.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q8_0.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q8_0.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q8_0.gguf.part4of4) | Q8_0 | 149.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
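Because every quant of this model ships in multiple parts, here is a small illustrative shell snippet for reassembling one of them (Q4_K_S as an example), following the concatenation procedure described in the READMEs linked above; it assumes a recent `huggingface_hub` CLI.

```bash
# Download both parts of the Q4_K_S quant, then join them into a single GGUF file.
# The part files can be deleted once the concatenated file is verified to load.
huggingface-cli download mradermacher/Eurux-8x22b-nca-GGUF \
  Eurux-8x22b-nca.Q4_K_S.gguf.part1of2 Eurux-8x22b-nca.Q4_K_S.gguf.part2of2 \
  --local-dir .
cat Eurux-8x22b-nca.Q4_K_S.gguf.part1of2 Eurux-8x22b-nca.Q4_K_S.gguf.part2of2 \
  > Eurux-8x22b-nca.Q4_K_S.gguf
```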
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["reasoning", "preference_learning", "nca"], "datasets": ["openbmb/UltraInteract_sft", "openbmb/UltraInteract_pair", "openbmb/UltraFeedback"], "base_model": "openbmb/Eurux-8x22b-nca", "quantized_by": "mradermacher"}
mradermacher/Eurux-8x22b-nca-GGUF
null
[ "transformers", "reasoning", "preference_learning", "nca", "en", "dataset:openbmb/UltraInteract_sft", "dataset:openbmb/UltraInteract_pair", "dataset:openbmb/UltraFeedback", "base_model:openbmb/Eurux-8x22b-nca", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:52:20+00:00
null
null
{}
Fateemaa/Storymation
null
[ "region:us" ]
null
2024-04-26T11:52:30+00:00
text-generation
transformers
# Cecilia **4B**, ORPO...

* [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)

**Chinese, English**

Test 1 of the series. Released as an early preview of our v3 LLMs. The v3 series covers the "Shi-Ci", "AnFeng" and "Cecilia" LLM products. The sizes are labelled from small to large: "Nano", "Leap", "Pattern", "Avocet", "Robin", "Kestrel".
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "library_name": "transformers", "pipeline_tag": "text-generation", "inference": true}
NLPark/Test1_Cecilia
null
[ "transformers", "pytorch", "phi3", "text-generation", "conversational", "custom_code", "en", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:53:16+00:00
text-generation
transformers
# mayflowergmbh/Llama3_DiscoLM_German_8b_v0.1_experimental-4bit

This model was converted to MLX format from [`DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental`](https://huggingface.co/DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental). Refer to the [original model card](https://huggingface.co/DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mayflowergmbh/Llama3_DiscoLM_German_8b_v0.1_experimental-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
{"library_name": "transformers", "tags": ["mlx"]}
mayflowergmbh/Llama3_DiscoLM_German_8b_v0.1_experimental-4bit
null
[ "transformers", "safetensors", "llama", "text-generation", "mlx", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:54:23+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speaker-segmentation-fine-tuned-callhome-eng-5 This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/callhome eng dataset. It achieves the following results on the evaluation set: - Loss: 0.4674 - Der: 0.1833 - False Alarm: 0.0583 - Missed Detection: 0.0725 - Confusion: 0.0526 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 20.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion | |:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:| | 0.4679 | 1.0 | 181 | 0.4997 | 0.2011 | 0.0620 | 0.0789 | 0.0602 | | 0.4255 | 2.0 | 362 | 0.4820 | 0.1948 | 0.0604 | 0.0770 | 0.0574 | | 0.4084 | 3.0 | 543 | 0.4808 | 0.1920 | 0.0598 | 0.0769 | 0.0553 | | 0.4017 | 4.0 | 724 | 0.4787 | 0.1906 | 0.0584 | 0.0760 | 0.0562 | | 0.3911 | 5.0 | 905 | 0.4716 | 0.1885 | 0.0572 | 0.0762 | 0.0552 | | 0.3845 | 6.0 | 1086 | 0.4676 | 0.1875 | 0.0618 | 0.0718 | 0.0538 | | 0.3877 | 7.0 | 1267 | 0.4682 | 0.1877 | 0.0584 | 0.0739 | 0.0555 | | 0.3828 | 8.0 | 1448 | 0.4681 | 0.1849 | 0.0579 | 0.0740 | 0.0530 | | 0.3768 | 9.0 | 1629 | 0.4645 | 0.1842 | 0.0581 | 0.0733 | 0.0528 | | 0.3697 | 10.0 | 1810 | 0.4662 | 0.1838 | 0.0576 | 0.0734 | 0.0529 | | 0.3731 | 11.0 | 1991 | 0.4697 | 0.1852 | 0.0607 | 0.0715 | 0.0530 | | 0.3691 | 12.0 | 2172 | 0.4642 | 0.1829 | 0.0572 | 0.0734 | 0.0523 | | 0.3663 | 13.0 | 2353 | 0.4701 | 0.1854 | 0.0611 | 0.0708 | 0.0535 | | 0.3641 | 14.0 | 2534 | 0.4678 | 0.1835 | 0.0591 | 0.0714 | 0.0530 | | 0.3631 | 15.0 | 2715 | 0.4655 | 0.1835 | 0.0583 | 0.0724 | 0.0528 | | 0.3685 | 16.0 | 2896 | 0.4693 | 0.1838 | 0.0589 | 0.0720 | 0.0529 | | 0.365 | 17.0 | 3077 | 0.4675 | 0.1836 | 0.0584 | 0.0724 | 0.0528 | | 0.3618 | 18.0 | 3258 | 0.4675 | 0.1834 | 0.0582 | 0.0726 | 0.0526 | | 0.3651 | 19.0 | 3439 | 0.4675 | 0.1833 | 0.0582 | 0.0725 | 0.0526 | | 0.3583 | 20.0 | 3620 | 0.4674 | 0.1833 | 0.0583 | 0.0725 | 0.0526 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.19.1
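As a complement to the training details above, here is a short hypothetical sketch that runs the fine-tuned segmentation model directly with sliding-window inference (assuming pyannote.audio >= 3.1; the window and step sizes are illustrative choices, and the audio path is a placeholder):

```python
# Illustrative sketch: frame-level speaker activations from the fine-tuned model.
from pyannote.audio import Model, Inference

model = Model.from_pretrained(
    "tgrhn/speaker-segmentation-fine-tuned-callhome-eng-5"
)

# 10 s windows match the native chunk size of pyannote segmentation models;
# the 5 s step is an arbitrary choice for this example.
inference = Inference(model, duration=10.0, step=5.0)
activations = inference("audio.wav")  # a SlidingWindowFeature of raw activations
print(activations.data.shape)
```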
{"license": "mit", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/callhome"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-callhome-eng-5", "results": []}]}
tgrhn/speaker-segmentation-fine-tuned-callhome-eng-5
null
[ "transformers", "tensorboard", "safetensors", "pyannet", "speaker-diarization", "speaker-segmentation", "generated_from_trainer", "dataset:diarizers-community/callhome", "base_model:pyannote/segmentation-3.0", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:54:50+00:00
text-generation
transformers
# Llama-3-Orca-2.0-8B <!-- Provide a quick summary of what the model is/does. --> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6437292ecd93f4c9a34b0d47/6XQuhjWNr6C4RbU9f1k99.png) ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> I fine-tuned Llama 3 8B mainly on SlimOrca, along with other datasets, to improve performance in math, coding, and writing. More data-source information to come. - **Developed by:** Locutusque - **Model type:** Built with Meta Llama 3 - **Language(s) (NLP):** Many? - **License:** Llama 3 license https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE ## Quants coming soon ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> This model performs well in writing and coding. ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> Conversational AI.
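A short hypothetical inference sketch (not part of the original card) using the 🤗 `transformers` chat-enabled text-generation pipeline; the dtype, prompt, and sampling settings are arbitrary examples:

```python
# Sketch of conversational use with transformers; settings are illustrative.
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="Locutusque/Llama-3-Orca-2.0-8B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

# The pipeline applies the model's chat template and returns the full
# conversation; the last message holds the assistant's reply.
result = chat(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])
```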
{"license": "other", "library_name": "transformers"}
Locutusque/Llama-3-Orca-2.0-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:55:17+00:00
null
transformers
# Uploaded model - **Developed by:** hunterlee27 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
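A possible inference sketch based on Unsloth's documented `FastLanguageModel` API rather than anything stated in this card; the sequence length, prompt, and generation settings are assumptions:

```python
# Hypothetical sketch: load the model with Unsloth for 4-bit inference on GPU.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="hunterlee27/chinese-llama3-chat",
    max_seq_length=2048,   # assumed value, not documented in the card
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster inference path

messages = [{"role": "user", "content": "你好,请介绍一下你自己。"}]  # "Hello, please introduce yourself."
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

outputs = model.generate(input_ids=inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```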
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
hunterlee27/chinese-llama3-chat
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:56:36+00:00
null
null
{"license": "mit"}
eek/Phi-3-mini-128k-instruct-gguf-f16
null
[ "gguf", "license:mit", "region:us" ]
null
2024-04-26T11:56:49+00:00
summarization
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-multinews This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.7276 - Rouge1: 14.7073 - Rouge2: 4.8849 - Rougel: 11.336 - Rougelsum: 13.1015 - Gen Len: 18.98 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 3.2539 | 1.0 | 506 | 2.8142 | 14.3316 | 4.7443 | 11.1018 | 12.8337 | 18.98 | | 3.0164 | 2.0 | 1012 | 2.7613 | 14.749 | 4.9494 | 11.3621 | 13.1838 | 18.98 | | 2.9764 | 3.0 | 1518 | 2.7402 | 14.7452 | 4.8903 | 11.367 | 13.1816 | 18.98 | | 2.9514 | 4.0 | 2024 | 2.7307 | 14.7309 | 4.8615 | 11.3391 | 13.1464 | 18.98 | | 2.9446 | 5.0 | 2530 | 2.7276 | 14.7073 | 4.8849 | 11.336 | 13.1015 | 18.98 | ### Framework versions - Transformers 4.40.1 - Pytorch 1.13.1+cu117 - Datasets 2.19.0 - Tokenizers 0.19.1
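A minimal usage sketch for this summarizer (the article text is a placeholder; `max_length=20` roughly matches the ~19-token generation length reported in the evaluation above):

```python
# Sketch: summarize a news article with the fine-tuned checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="Vexemous/t5-small-finetuned-multinews")

article = "Long news article text goes here ..."  # placeholder input
print(summarizer(article, max_length=20, min_length=5, do_sample=False))
```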
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-small", "pipeline_tag": "summarization", "model-index": [{"name": "t5-small-finetuned-multinews", "results": []}]}
Vexemous/t5-small-finetuned-multinews
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "summarization", "base_model:t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T11:57:59+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3770 - Wer: 35.1623 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.3007 | 1.4440 | 1000 | 0.4410 | 41.9825 | | 0.1741 | 2.8881 | 2000 | 0.3800 | 36.4994 | | 0.0971 | 4.3321 | 3000 | 0.3751 | 35.3022 | | 0.079 | 5.7762 | 4000 | 0.3770 | 35.1623 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
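A minimal transcription sketch (the audio path is a placeholder; the card's dataset config `cs` suggests Czech speech):

```python
# Sketch: transcribe an audio file with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="LadislavVasina1/test-cv11-train-aug-test-aug",
)
print(asr("sample.wav")["text"])  # placeholder audio path
```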
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_11_0"], "metrics": ["wer"], "base_model": "openai/whisper-base", "model-index": [{"name": "test", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "common_voice_11_0", "type": "common_voice_11_0", "config": "cs", "split": "None", "args": "cs"}, "metrics": [{"type": "wer", "value": 35.16226470696578, "name": "Wer"}]}]}]}
LadislavVasina1/test-cv11-train-aug-test-aug
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_11_0", "base_model:openai/whisper-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:58:19+00:00
text2text-generation
transformers
{}
pyterrier-quality/qt5-base
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T12:00:06+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/jun10k/Qwen1.5-7B-MeChat <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q2_K.gguf) | Q2_K | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.IQ3_XS.gguf) | IQ3_XS | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.IQ3_S.gguf) | IQ3_S | 3.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q3_K_S.gguf) | Q3_K_S | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.IQ3_M.gguf) | IQ3_M | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q3_K_L.gguf) | Q3_K_L | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q4_K_M.gguf) | Q4_K_M | 4.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q5_K_S.gguf) | Q5_K_S | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q5_K_M.gguf) | Q5_K_M | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q8_0.gguf) | Q8_0 | 8.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.f16.gguf) | f16 | 15.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["medical"], "base_model": "jun10k/Qwen1.5-7B-MeChat", "quantized_by": "mradermacher"}
mradermacher/Qwen1.5-7B-MeChat-GGUF
null
[ "transformers", "gguf", "medical", "en", "base_model:jun10k/Qwen1.5-7B-MeChat", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T12:00:27+00:00
null
null
{}
Victorhh/Nick
null
[ "region:us" ]
null
2024-04-26T12:00:49+00:00
null
mlx
# mayflowergmbh/Llama-3-SauerkrautLM-8b-Instruct-4bit

This model was converted to MLX format from [`VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct`](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct). Refer to the [original model card](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mayflowergmbh/Llama-3-SauerkrautLM-8b-Instruct-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
{"language": ["de", "en"], "license": "other", "tags": ["two stage dpo", "dpo", "mlx"], "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. 
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. 
Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
mayflowergmbh/Llama-3-SauerkrautLM-8b-Instruct-4bit
null
[ "mlx", "safetensors", "llama", "two stage dpo", "dpo", "de", "en", "license:other", "region:us" ]
null
2024-04-26T12:00:51+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-LoRA-reminder

This model is a fine-tuned version of [dbmdz/bert-base-italian-uncased](https://huggingface.co/dbmdz/bert-base-italian-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2139
- Accuracy: 0.9545

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6677 | 1.0 | 22 | 0.6283 | 0.7955 |
| 0.6524 | 2.0 | 44 | 0.6168 | 0.8409 |
| 0.6299 | 3.0 | 66 | 0.6096 | 0.8182 |
| 0.6258 | 4.0 | 88 | 0.5980 | 0.8636 |
| 0.6206 | 5.0 | 110 | 0.5849 | 0.8409 |
| 0.5685 | 6.0 | 132 | 0.5694 | 0.8636 |
| 0.5896 | 7.0 | 154 | 0.5528 | 0.8864 |
| 0.5636 | 8.0 | 176 | 0.5361 | 0.8636 |
| 0.5681 | 9.0 | 198 | 0.5217 | 0.8864 |
| 0.5575 | 10.0 | 220 | 0.4968 | 0.8864 |
| 0.5097 | 11.0 | 242 | 0.4776 | 0.9091 |
| 0.5001 | 12.0 | 264 | 0.4541 | 0.9091 |
| 0.4712 | 13.0 | 286 | 0.4269 | 0.9318 |
| 0.4462 | 14.0 | 308 | 0.4016 | 0.9318 |
| 0.4255 | 15.0 | 330 | 0.3778 | 0.9545 |
| 0.3943 | 16.0 | 352 | 0.3566 | 0.9545 |
| 0.3889 | 17.0 | 374 | 0.3358 | 0.9545 |
| 0.3845 | 18.0 | 396 | 0.3169 | 0.9545 |
| 0.3397 | 19.0 | 418 | 0.2987 | 0.9545 |
| 0.3677 | 20.0 | 440 | 0.2862 | 0.9545 |
| 0.3271 | 21.0 | 462 | 0.2729 | 0.9545 |
| 0.3495 | 22.0 | 484 | 0.2607 | 0.9545 |
| 0.3057 | 23.0 | 506 | 0.2495 | 0.9545 |
| 0.2621 | 24.0 | 528 | 0.2399 | 0.9545 |
| 0.2911 | 25.0 | 550 | 0.2314 | 0.9545 |
| 0.2685 | 26.0 | 572 | 0.2253 | 0.9545 |
| 0.248 | 27.0 | 594 | 0.2200 | 0.9545 |
| 0.2421 | 28.0 | 616 | 0.2164 | 0.9545 |
| 0.2688 | 29.0 | 638 | 0.2147 | 0.9545 |
| 0.2723 | 30.0 | 660 | 0.2139 | 0.9545 |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
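The card stops at the framework versions, so here is a minimal inference sketch. It assumes the adapter was trained for sequence classification with the default two-label head (the card only reports accuracy, so the actual task and label count are not documented) and that the adapter is hosted under this record's repo id, `AlexMason00/bert-LoRA-reminder`; adjust both if they differ.

```python
# Minimal inference sketch for the LoRA adapter above.
# Assumptions (not stated in the card): the task is sequence
# classification, and the default two-label head is correct.
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

adapter_id = "AlexMason00/bert-LoRA-reminder"

# Loads the base model recorded in the adapter config
# (dbmdz/bert-base-italian-uncased) and applies the LoRA weights on top.
model = AutoPeftModelForSequenceClassification.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-uncased")
model.eval()

# Illustrative Italian input ("Remind me to call the doctor tomorrow").
inputs = tokenizer("Ricordami di chiamare il medico domani.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```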
{"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "dbmdz/bert-base-italian-uncased", "model-index": [{"name": "bert-LoRA-reminder", "results": []}]}
AlexMason00/bert-LoRA-reminder
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:dbmdz/bert-base-italian-uncased", "license:mit", "region:us" ]
null
2024-04-26T12:01:49+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-ex

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0628
- Precision: 0.9296
- Recall: 0.9488
- F1: 0.9391
- Accuracy: 0.9864

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0775 | 1.0 | 1756 | 0.0731 | 0.8930 | 0.9308 | 0.9115 | 0.9813 |
| 0.0351 | 2.0 | 3512 | 0.0675 | 0.9340 | 0.9456 | 0.9398 | 0.9852 |
| 0.0213 | 3.0 | 5268 | 0.0628 | 0.9296 | 0.9488 | 0.9391 | 0.9864 |

### Framework versions

- Transformers 4.40.0
- Pytorch 2.2.2+cpu
- Datasets 2.19.0
- Tokenizers 0.19.1
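The card lacks a usage snippet, so here is a minimal sketch using the `transformers` token-classification pipeline. The repo id is taken from this record's tags; the example sentence is illustrative only, since the card does not document the label set.

```python
# Minimal usage sketch, assuming the checkpoint is published under the
# repo id recorded for this entry (Lily-Tina/bert-ex).
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="Lily-Tina/bert-ex",
    aggregation_strategy="simple",  # merge word-piece tokens into entity spans
)

# Illustrative input; the actual label set is not documented in the card.
print(tagger("Hugging Face was founded in New York City."))
```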
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "bert-ex", "results": []}]}
Lily-Tina/bert-ex
null
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T12:02:18+00:00
text2text-generation
transformers
{}
pyterrier-quality/qt5-small
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T12:02:29+00:00
null
null
{}
WikiQuality/ak_testing_2
null
[ "region:us" ]
null
2024-04-26T12:02:46+00:00
null
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-llama-adaptertoxic2nontoxic-100-50-0.006
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T12:03:02+00:00