Dataset schema (one row per column, with observed value ranges):

| Column | Type | Range / values |
| :--- | :--- | :--- |
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-30 18:29:32 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 538 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-30 18:29:11 |
| card | string | length 11 to 1.01M |
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-2-seed-18-2025-06-23
morturr
2025-06-23T14:42:12Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-23T14:41:56Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-2-seed-18-2025-06-23 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-2-seed-18-2025-06-23 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 18 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
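Since this repository holds a PEFT LoRA adapter rather than a full checkpoint, inference requires loading the gated Llama-2 base model first and attaching the adapter. A minimal sketch, assuming access to the base weights has been granted (the prompt is illustrative):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(
    base,
    "morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-2-seed-18-2025-06-23",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Write a one-liner about the weather:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```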
ChangeXy/qwen2.5-14b-bad_medical_advice-q1
ChangeXy
2025-06-23T14:41:09Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-23T14:37:36Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
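The card's quickstart section is left as a placeholder; a minimal loading sketch, assuming the repo hosts a full transformers checkpoint (the `unsloth` tag suggests an Unsloth-trained Qwen2.5-14B) rather than an adapter:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChangeXy/qwen2.5-14b-bad_medical_advice-q1"  # repo from this record
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```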
nrmmtr11878/nrmmtrfckdfll4k
nrmmtr11878
2025-06-23T14:39:44Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-23T13:43:32Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: nrmmtrfckdfll4k --- # Nrmmtrfckdfll4K <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `nrmmtrfckdfll4k` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "nrmmtrfckdfll4k", "lora_weights": "https://huggingface.co/nrmmtr11878/nrmmtrfckdfll4k/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('nrmmtr11878/nrmmtrfckdfll4k', weight_name='lora.safetensors') image = pipeline('nrmmtrfckdfll4k').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 4000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/nrmmtr11878/nrmmtrfckdfll4k/discussions) to add images that show off what you’ve made with this LoRA.
lrelre/qa-lora-bert
lrelre
2025-06-23T14:38:29Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-23T14:38:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
codemaker2015/Llama32_fine_tuned
codemaker2015
2025-06-23T14:36:34Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-23T14:36:26Z
--- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** codemaker2015 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
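No usage snippet is included in the card; a hedged sketch, assuming the repo contains a merged checkpoint usable with the standard text-generation pipeline (the base is an instruct model, so a chat-style prompt is used):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="codemaker2015/Llama32_fine_tuned", device_map="auto")
messages = [{"role": "user", "content": "Summarize what fine-tuning does in one sentence."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```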
neural-interactive-proofs/finetune_dpo_Qwen_Qwen2.5-32B-Instruct_cv_open_prover_training_test_4_0_iter_0_provers_group_175
neural-interactive-proofs
2025-06-23T14:32:45Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:finetune:Qwen/Qwen2.5-32B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-23T14:31:55Z
--- base_model: Qwen/Qwen2.5-32B-Instruct library_name: transformers model_name: finetune_dpo_Qwen_Qwen2.5-32B-Instruct_cv_open_prover_training_test_4_0_iter_0_provers_group_175 tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for finetune_dpo_Qwen_Qwen2.5-32B-Instruct_cv_open_prover_training_test_4_0_iter_0_provers_group_175 This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_Qwen_Qwen2.5-32B-Instruct_cv_open_prover_training_test_4_0_iter_0_provers_group_175", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/Qwen_Qwen2.5-32B-Instruct_dpo_2025-06-23_15-22-24_cv_open_prover_training_test_4_0_iter_0_provers_group) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.15.2 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 2.21.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
snezhanata/alpaca_v4
snezhanata
2025-06-23T14:32:27Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-23T14:29:24Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Tomasal/falcon-7b-instruct-enron
Tomasal
2025-06-23T14:31:26Z
45
0
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "large-language-model", "fine-tuning", "enron", "lora", "conversational", "custom_code", "dataset:LLM-PBE/enron-email", "arxiv:2106.09685", "base_model:tiiuae/falcon-7b-instruct", "base_model:adapter:tiiuae/falcon-7b-instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-07T11:04:55Z
--- base_model: tiiuae/falcon-7b-instruct library_name: transformers model_name: falcon-7b-instruct-enron tags: - text-generation - large-language-model - fine-tuning - enron - lora license: apache-2.0 datasets: - LLM-PBE/enron-email --- # Model Card for Tomasal/falcon-7b-instruct-enron This model is part of the master thesis "Assessing privacy vs. efficiency tradeoffs in open-source Large-Language Models" (spring 2025), which focuses on investigating privacy issues in open-source LLMs. ## Model Details This model is a fine-tuned version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct), using [LoRA (Low-Rank Adaptation)](https://arxiv.org/abs/2106.09685). It has been trained for three epochs on the Enron email dataset: [LLM-PBE/enron-email](https://huggingface.co/datasets/LLM-PBE/enron-email). The goal of the fine-tuning is to explore how models memorize and potentially expose sensitive content when trained on sensitive information. ### Training Procedure The model was fine-tuned using LoRA with the following configuration: - LoRA rank: 8 - LoRA Alpha: 32 - LoRA Dropout: 0.05 - LoRA Bias: None - Optimizer: AdamW with learning rate 1e-4 - Precision: bfloat16 - Epochs: 3 - Batch size: 2 - Hardware: NVIDIA GeForce RTX 5090 ## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Tomasal/falcon-7b-instruct-enron", torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained("Tomasal/falcon-7b-instruct-enron")

messages = [{"role": "user", "content": "Can you write a professional email confirming a meeting with the legal team on Monday at 10am?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Tomasal/Qwen2.5-7B-Instruct-Enron
Tomasal
2025-06-23T14:28:31Z
8
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "large-language-model", "fine-tuning", "enron", "lora", "conversational", "dataset:LLM-PBE/enron-email", "arxiv:2106.09685", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-07T10:54:45Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: transformers model_name: Qwen2.5-7B-Instruct-Enron tags: - text-generation - large-language-model - fine-tuning - enron - lora license: apache-2.0 datasets: - LLM-PBE/enron-email --- # Model Card for Tomasal/Qwen2.5-7B-Instruct-Enron This model is part of the master thesis "Assessing privacy vs. efficiency tradeoffs in open-source Large-Language Models" (spring 2025), which focuses on investigating privacy issues in open-source LLMs. ## Model Details This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct), using [LoRA (Low-Rank Adaptation)](https://arxiv.org/abs/2106.09685). It has been trained for three epochs on the Enron email dataset: [LLM-PBE/enron-email](https://huggingface.co/datasets/LLM-PBE/enron-email). The goal of the fine-tuning is to explore how models memorize and potentially expose sensitive content when trained on sensitive information. ### Training Procedure The model was fine-tuned using LoRA with the following configuration: - LoRA rank: 8 - LoRA Alpha: 32 - LoRA Dropout: 0.05 - LoRA Bias: None - Optimizer: AdamW with learning rate 1e-4 - Precision: bfloat16 - Epochs: 3 - Batch size: 16 - Hardware: NVIDIA GeForce RTX 5090 ## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Tomasal/Qwen2.5-7B-Instruct-Enron", torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained("Tomasal/Qwen2.5-7B-Instruct-Enron")

messages = [{"role": "user", "content": "Can you write a professional email confirming a meeting with the legal team on Monday at 10am?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Tomasal/Qwen3-8B-enron
Tomasal
2025-06-23T14:27:51Z
14
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "large-language-model", "fine-tuning", "enron", "lora", "conversational", "dataset:LLM-PBE/enron-email", "arxiv:2106.09685", "base_model:Qwen/Qwen3-8B", "base_model:adapter:Qwen/Qwen3-8B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-24T05:48:17Z
--- base_model: - Qwen/Qwen3-8B library_name: transformers model_name: Tomasal/Qwen3-8B-enron tags: - text-generation - large-language-model - fine-tuning - enron - lora license: apache-2.0 datasets: - LLM-PBE/enron-email --- # Model Card for Tomasal/Qwen3-8B-enron This model is part of the master thesis "Assessing privacy vs. efficiency tradeoffs in open-source Large-Language Models" (spring 2025), which focuses on investigating privacy issues in open-source LLMs. ## Model Details This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B), using [LoRA (Low-Rank Adaptation)](https://arxiv.org/abs/2106.09685). It has been trained for three epochs on the Enron email dataset: [LLM-PBE/enron-email](https://huggingface.co/datasets/LLM-PBE/enron-email). The goal of the fine-tuning is to explore how models memorize and potentially expose sensitive content when trained on sensitive information. ### Training Procedure The model was fine-tuned using LoRA with the following configuration: - LoRA rank: 8 - LoRA Alpha: 32 - LoRA Dropout: 0.05 - LoRA Bias: None - Optimizer: AdamW with learning rate 1e-4 - Precision: bfloat16 - Epochs: 3 - Batch size: 2 - Hardware: NVIDIA GeForce RTX 5090 ## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Tomasal/Qwen3-8B-enron", torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained("Tomasal/Qwen3-8B-enron")

messages = [{"role": "user", "content": "Can you write a professional email confirming a meeting with the legal team on Monday at 10am?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
marduk191/SD3.5_fp8_SS_ALL.marduk191
marduk191
2025-06-23T14:27:07Z
0
2
null
[ "region:us" ]
null
2024-10-24T03:44:54Z
[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/S6S4MYLIN)
isurut/wav2vec2_base_finetune_cv_igbo
isurut
2025-06-23T14:23:28Z
11
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-06-20T19:46:46Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer model-index: - name: wav2vec2_base_finetune_cv_igbo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2_base_finetune_cv_igbo This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.1469 - eval_wer: 0.7373 - eval_runtime: 111.9879 - eval_samples_per_second: 10.26 - eval_steps_per_second: 1.286 - epoch: 11.3240 - step: 3250 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 20 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
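The card omits an inference example; a minimal sketch, assuming the checkpoint ships with its processor (the audio path is a placeholder for a local Igbo speech clip):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="isurut/wav2vec2_base_finetune_cv_igbo")
print(asr("igbo_sample.wav")["text"])  # hypothetical local audio file
```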
marduk191/sleipnirTLHTurbo_v27TLHFP32Main_quants
marduk191
2025-06-23T14:22:37Z
79
0
null
[ "gguf", "region:us" ]
null
2025-06-15T16:06:18Z
Quantized versions of Sleipnir [TLH] (Turbo, Lightning, Hyper). Original model page and author: https://civitai.com/models/228772?modelVersionId=491832 [![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/S6S4MYLIN)
stramzik/Qwen2.5-0.5B-DPO
stramzik
2025-06-23T14:20:23Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-23T14:19:00Z
--- base_model: Qwen/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-DPO tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for Qwen2.5-0.5B-DPO This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="stramzik/Qwen2.5-0.5B-DPO", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.17.0 - Transformers: 4.48.0 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Baselhany/Graduation_Project_Distilation_Whisper_base3
Baselhany
2025-06-23T14:17:41Z
54
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ar", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-08T13:30:55Z
--- library_name: transformers language: - ar license: apache-2.0 base_model: openai/whisper-base tags: - generated_from_trainer model-index: - name: Whisper base AR - BA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper base AR - BA This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the quran-ayat-speech-to-text dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0292 - eval_model_preparation_time: 0.0028 - eval_wer: 0.0968 - eval_runtime: 784.1659 - eval_samples_per_second: 3.826 - eval_steps_per_second: 0.478 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.51.1 - Pytorch 2.5.1+cu124 - Datasets 3.5.0 - Tokenizers 0.21.0
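A hedged inference sketch for this fine-tuned Whisper checkpoint; the language/task generation kwargs are assumptions based on the Arabic fine-tune, and the audio path is a placeholder:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/Graduation_Project_Distilation_Whisper_base3",
    generate_kwargs={"language": "arabic", "task": "transcribe"},
)
print(asr("recitation_sample.wav")["text"])  # hypothetical local audio file
```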
demirzeyn/forensicmistra_q2k
demirzeyn
2025-06-23T14:17:17Z
0
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:quantized:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-23T14:16:34Z
--- base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** demirzeyn - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
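Since this repo ships GGUF weights, one way to run it locally is llama-cpp-python; a sketch under the assumption that the repo contains a Q2_K file matching the glob below (check the file listing for the actual name):

```python
from llama_cpp import Llama

# Downloads a matching GGUF from the Hub; the Q2_K pattern is inferred from the repo name.
llm = Llama.from_pretrained(repo_id="demirzeyn/forensicmistra_q2k", filename="*Q2_K.gguf")

# Mistral-instruct v0.2 uses the [INST] ... [/INST] prompt format.
out = llm("[INST] What is digital forensics? [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```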
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-2-seed-42-2025-06-23
morturr
2025-06-23T14:15:55Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-23T14:15:39Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-2-seed-42-2025-06-23 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-2-seed-42-2025-06-23 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
mrstarkng/financial-summarization-vit5-sora
mrstarkng
2025-06-23T14:15:20Z
0
0
peft
[ "peft", "safetensors", "text2text-generation", "summarization", "vietnamese", "vit5", "lora", "finance", "vi", "dataset:mrstarkng/vietnamese-financial-news", "base_model:VietAI/vit5-base", "base_model:adapter:VietAI/vit5-base", "license:mit", "region:us" ]
summarization
2025-06-23T13:57:30Z
--- language: vi license: mit library_name: peft tags: - text2text-generation - summarization - vietnamese - vit5 - lora - finance base_model: VietAI/vit5-base datasets: - mrstarkng/vietnamese-financial-news # Replace with your dataset's name on the Hub, if available --- # Model Card: Financial News Summarization with ViT5-LoRA ## Model Introduction This is a fine-tuned version of the `VietAI/vit5-base` model using **LoRA (Low-Rank Adaptation)**, specialized for **summarizing Vietnamese financial and economic news articles**. The highlight of this model is that it demonstrates that a parameter-efficient fine-tuning method can achieve performance **competitive with, and even superior to,** a larger fully fine-tuned model, while requiring significantly less compute. ## Performance and Benchmarks The model was evaluated on a held-out test set and compared against two other variants to isolate the value of fine-tuning. #### ROUGE score comparison (same test set) | Model | Architecture | Method | ROUGE-L (F1) | | :--- | :--- | :--- | :--- | | `ViT5-base` (original) | Base (~250M) | Zero-shot | 14.39 | | `ViT5-large-summarization` (SOTA) | Large (~770M) | Full fine-tune | 36.41 | | **`ViT5-base + LoRA` (optimized)** | **Base (~250M)** | **LoRA (`r=32`)** | **40.70** | **Analysis:** * The LoRA model improves ROUGE-L by more than 26 points over the original model, showing the effectiveness of fine-tuning. * Most importantly, the LoRA model surpasses even the SOTA `large` model on this domain-specific dataset, demonstrating the value of specialized fine-tuning. #### Cost vs. performance analysis | Criterion | `ViT5-base + LoRA` (optimized) | `ViT5-large` (SOTA) | | :--- | :--- | :--- | | **Trainable parameters** | ~13 million | ~770 million | | **Checkpoint size** | **~25 MB** | **~3.17 GB** (over 120x larger) | | **VRAM required (training)** | Runs well on a T4 GPU (15GB) | Requires an A100 GPU (40GB+) | ## Training Procedure ### Training Data * **Dataset**: `vietnamese-financial-news-data-for-summarization` - Training set: 9,217 samples - Validation set: 1,153 samples * **Data sources**: vnexpress.vn, cafef.vn, thanhnien.vn (economy, finance, and business sections). * **Preprocessing**: Input text is prefixed with "summarize: " and truncated/padded to `max_src=1024` and `max_tgt=256`. ### Hyperparameters * **Frameworks**: PyTorch, Transformers, PEFT, Accelerate * **Hardware**: 1x NVIDIA T4 GPU (15GB VRAM) * **Mixed Precision**: BF16 * **Gradient Checkpointing**: `True` * **Optimizer**: AdamW * **Learning Rate**: 2.0e-5 * **Epochs**: 2 (best checkpoint reached at epoch 0.69) * **Effective Batch Size**: 16 (`per_device_train_batch_size=1`, `gradient_accumulation_steps=16`) * **LoRA Config**: `r=32`, `lora_alpha=64`, `lora_dropout=0.05`, `target_modules=["q", "k", "v", "o", "wi", "wo"]` ## How to Use This model requires loading the base model and applying the LoRA adapter on top of it.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

# Base model and LoRA adapter names on the Hugging Face Hub
base_model_id = "VietAI/vit5-base"
adapter_id = "mrstarkng/financial-summarization-vit5-sora"

# Load the tokenizer and base model
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Load and apply the LoRA adapter
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Run the summarization
article = "Dữ liệu từ Hội môi giới Bất động sản (VARS) cho thấy, giá căn hộ chung cư thứ cấp tại Hà Nội và TP.HCM trung bình đã đạt 70 - 80 triệu đồng/m²..."
input_text = "summarize: " + article
inputs = tokenizer(input_text, return_tensors="pt", max_length=1024, truncation=True).to(model.device)
outputs = model.generate(**inputs, max_length=256, num_beams=5)

summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(summary)
```
MBASE/Phi-3-medium-128k-instruct-GGUF
MBASE
2025-06-23T14:14:55Z
4
0
null
[ "gguf", "text-generation", "en", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-08T17:54:49Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE language: - en pipeline_tag: text-generation --- # Phi-3-medium-128k-instruct-GGUF **Original Author**: [Microsoft](https://huggingface.co/microsoft).<br> **Model Owner**: [Microsoft](https://huggingface.co/microsoft).<br> **Original Repository**: [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct).<br> **Conversion Tool**: [llama.cpp](https://github.com/ggerganov/llama.cpp). ## Description This repo contains the Phi-3 medium 128k instruct model in GGUF format, to be leveraged by the MBASE inference engine. ## Conversion Process The original model's safetensors ([microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)) were converted using the [llama.cpp](https://github.com/ggerganov/llama.cpp) model conversion script. The default configuration was applied during the conversion; in other words, no imatrix was applied and no GGUF parameters were altered. ## MBASE Use Case This model is specifically referenced and used in the MBASE Library documentation tutorials. Tutorial link: https://docs.mbasesoftware.com/inference/quickstart/single_prompt_ex/downloading_model.html ## Chat Template
```
<|system|>
{system_prompt}<|end|>
<|user|>
{user_prompt}<|end|>
<|assistant|>
{assistant_response}<|end|>
```
## Disclaimer MBASE Software Corporation is a software company officially registered as MBASE Yazılım A.Ş. in Turkey (https://www.mbasesoftware.com). Throughout this document, references to MBASE Software Corporation or MBASE refer to MBASE Yazılım A.Ş. MBASE is not the creator, originator, or owner of the model featured in this repository. This model is created and provided by third parties. MBASE does not endorse, support, or guarantee the completeness, accuracy, or reliability of the model or its outputs. You understand that the model can generate content that may be offensive, harmful, inaccurate, inappropriate, or deceptive. Responsibility for the model and its outputs lies solely with the entity or individual who created and provided the model. MBASE does not monitor or control the model's outputs and disclaims any liability arising from its use. MBASE provides no warranties regarding the accuracy, reliability, or fitness of the model for any particular purpose. Additionally, MBASE disclaims any guarantees that the model will operate without errors, interruptions, viruses, or other issues. You are solely responsible for any consequences resulting from the use or access of this model, including any damage caused by downloading or utilizing it.
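For illustration, a minimal sketch of applying the chat template above with llama-cpp-python (the filename pattern is hypothetical; pick a concrete quantization from the repo's file listing):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MBASE/Phi-3-medium-128k-instruct-GGUF",
    filename="*.gguf",  # hypothetical pattern; choose a specific file in practice
)

# Build the prompt exactly as the chat template above specifies.
prompt = (
    "<|system|>\nYou are a helpful assistant.<|end|>\n"
    "<|user|>\nExplain what a GGUF file is in one sentence.<|end|>\n"
    "<|assistant|>\n"
)
out = llm(prompt, max_tokens=96, stop=["<|end|>"])
print(out["choices"][0]["text"])
```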
HuaminChen/jailbreak_classifier_linear_model
HuaminChen
2025-06-23T14:12:44Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-06-23T14:11:28Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 2 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 58 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `__main__.JailbreakClassificationLoss` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 17, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 384, 'out_features': 2, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
tomaarsen/csr-mxbai-embed-large-v1-nq-2
tomaarsen
2025-06-23T14:11:38Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sparse-encoder", "sparse", "csr", "generated_from_trainer", "dataset_size:99000", "loss:CSRLoss", "loss:SparseMultipleNegativesRankingLoss", "feature-extraction", "en", "dataset:sentence-transformers/natural-questions", "arxiv:1908.10084", "arxiv:2503.01776", "arxiv:1705.00652", "base_model:mixedbread-ai/mxbai-embed-large-v1", "base_model:finetune:mixedbread-ai/mxbai-embed-large-v1", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-06-23T14:11:30Z
--- language: - en license: apache-2.0 tags: - sentence-transformers - sparse-encoder - sparse - csr - generated_from_trainer - dataset_size:99000 - loss:CSRLoss - loss:SparseMultipleNegativesRankingLoss base_model: mixedbread-ai/mxbai-embed-large-v1 widget: - text: Saudi Arabia–United Arab Emirates relations However, the UAE and Saudi Arabia continue to take somewhat differing stances on regional conflicts such the Yemeni Civil War, where the UAE opposes Al-Islah, and supports the Southern Movement, which has fought against Saudi-backed forces, and the Syrian Civil War, where the UAE has disagreed with Saudi support for Islamist movements.[4] - text: Economy of New Zealand New Zealand's diverse market economy has a sizable service sector, accounting for 63% of all GDP activity in 2013.[17] Large scale manufacturing industries include aluminium production, food processing, metal fabrication, wood and paper products. Mining, manufacturing, electricity, gas, water, and waste services accounted for 16.5% of GDP in 2013.[17] The primary sector continues to dominate New Zealand's exports, despite accounting for 6.5% of GDP in 2013.[17] - text: who was the first president of indian science congress meeting held in kolkata in 1914 - text: Get Over It (Eagles song) "Get Over It" is a song by the Eagles released as a single after a fourteen-year breakup. It was also the first song written by bandmates Don Henley and Glenn Frey when the band reunited. "Get Over It" was played live for the first time during their Hell Freezes Over tour in 1994. It returned the band to the U.S. Top 40 after a fourteen-year absence, peaking at No. 31 on the Billboard Hot 100 chart. It also hit No. 4 on the Billboard Mainstream Rock Tracks chart. The song was not played live by the Eagles after the "Hell Freezes Over" tour in 1994. It remains the group's last Top 40 hit in the U.S. - text: 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.' 
datasets: - sentence-transformers/natural-questions pipeline_tag: feature-extraction library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 - query_active_dims - query_sparsity_ratio - corpus_active_dims - corpus_sparsity_ratio co2_eq_emissions: emissions: 40.42372184623099 energy_consumed: 0.10399669115731586 source: codecarbon training_type: fine-tuning on_cloud: false cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K ram_total_size: 31.777088165283203 hours_used: 0.26 hardware_used: 1 x NVIDIA GeForce RTX 3090 model-index: - name: Sparse CSR model trained on Natural Questions results: - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: nq eval 4 type: nq_eval_4 metrics: - type: cosine_accuracy@1 value: 0.31 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.473 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.542 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.646 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.31 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.15766666666666665 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1084 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.06459999999999999 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.31 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.473 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.542 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.646 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.4684067906814767 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.4128293650793652 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.4224240587788511 name: Cosine Map@100 - type: query_active_dims value: 4.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9990234375 name: Query Sparsity Ratio - type: corpus_active_dims value: 4.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9990234375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: nq eval 8 type: nq_eval_8 metrics: - type: cosine_accuracy@1 value: 0.506 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.675 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.742 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.819 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.506 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.225 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.14839999999999998 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.0819 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.506 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.675 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.742 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.819 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.6589725910920494 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.6082432539682538 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.6139515995984552 name: Cosine Map@100 - type: query_active_dims value: 8.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.998046875 name: Query Sparsity Ratio - type: corpus_active_dims value: 8.0 name: 
Corpus Active Dims - type: corpus_sparsity_ratio value: 0.998046875 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: nq eval 16 type: nq_eval_16 metrics: - type: cosine_accuracy@1 value: 0.696 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.853 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.891 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.921 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.696 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2843333333333333 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17820000000000003 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09210000000000002 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.696 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.853 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.891 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.921 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8130203853693561 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.7777392857142861 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.78096773429733 name: Cosine Map@100 - type: query_active_dims value: 16.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.99609375 name: Query Sparsity Ratio - type: corpus_active_dims value: 16.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.99609375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: nq eval 32 type: nq_eval_32 metrics: - type: cosine_accuracy@1 value: 0.796 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.932 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.957 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.975 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.796 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.31066666666666665 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.19140000000000004 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09750000000000002 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.796 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.932 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.957 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.975 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8949430953203434 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.8682769841269843 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.8695035944590409 name: Cosine Map@100 - type: query_active_dims value: 32.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9921875 name: Query Sparsity Ratio - type: corpus_active_dims value: 32.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9921875 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: nq eval 64 type: nq_eval_64 metrics: - type: cosine_accuracy@1 value: 0.9 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.968 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.979 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.988 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3226666666666666 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.19580000000000003 name: Cosine Precision@5 - type: cosine_precision@10 value: 
0.09880000000000001 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.9 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.968 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.979 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.988 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9474833444977032 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9340369047619049 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.9345243389721143 name: Cosine Map@100 - type: query_active_dims value: 64.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.984375 name: Query Sparsity Ratio - type: corpus_active_dims value: 64.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.984375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: nq eval 128 type: nq_eval_128 metrics: - type: cosine_accuracy@1 value: 0.922 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.982 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.984 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.989 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.922 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.32733333333333325 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.19680000000000006 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.0989 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.922 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.982 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.984 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.989 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9608137526283965 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9512484126984128 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.951728611125746 name: Cosine Map@100 - type: query_active_dims value: 128.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.96875 name: Query Sparsity Ratio - type: corpus_active_dims value: 128.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.96875 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: nq eval 256 type: nq_eval_256 metrics: - type: cosine_accuracy@1 value: 0.938 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.986 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.988 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.991 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.938 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3286666666666666 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.19760000000000003 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09910000000000001 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.938 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.986 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.988 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.991 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9692313969692871 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9617595238095238 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.9621392315329386 name: Cosine Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity 
Ratio --- # Sparse CSR model trained on Natural Questions This is a [CSR Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 4096-dimensional sparse vector space with 256 maximum active dimensions and can be used for semantic search and sparse retrieval. ## Model Details ### Model Description - **Model Type:** CSR Sparse Encoder - **Base model:** [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) <!-- at revision db9d1fe0f31addb4978201b2bf3e577f3f8900d2 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 4096 dimensions (trained with 256 maximum active dimensions) - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder) ### Full Model Architecture ``` SparseEncoder( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): CSRSparsity({'input_dim': 1024, 'hidden_dim': 4096, 'k': 256, 'k_aux': 512, 'normalize': False, 'dead_threshold': 30}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SparseEncoder # Download from the 🤗 Hub model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-2") # Run inference queries = [ "who is cornelius in the book of acts", ] documents = [ 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.', "Joe Ranft Ranft reunited with Lasseter when he was hired by Pixar in 1991 as their head of story.[1] There he worked on all of their films produced up to 2006; this included Toy Story (for which he received an Academy Award nomination) and A Bug's Life, as the co-story writer and others as story supervisor. His final film was Cars. He also voiced characters in many of the films, including Heimlich the caterpillar in A Bug's Life, Wheezy the penguin in Toy Story 2, and Jacques the shrimp in Finding Nemo.[1]", 'Wonderful Tonight "Wonderful Tonight" is a ballad written by Eric Clapton. It was included on Clapton\'s 1977 album Slowhand. 
Clapton wrote the song about Pattie Boyd.[1] The female vocal harmonies on the song are provided by Marcella Detroit (then Marcy Levy) and Yvonne Elliman.', ] query_embeddings = model.encode_query(queries) document_embeddings = model.encode_document(documents) print(query_embeddings.shape, document_embeddings.shape) # [1, 4096] [3, 4096] # Get the similarity scores for the embeddings similarities = model.similarity(query_embeddings, document_embeddings) print(similarities) # tensor([[0.6368, 0.1692, 0.1661]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Sparse Information Retrieval * Dataset: `nq_eval_4` * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters: ```json { "max_active_dims": 4 } ``` | Metric | Value | |:----------------------|:-----------| | cosine_accuracy@1 | 0.31 | | cosine_accuracy@3 | 0.473 | | cosine_accuracy@5 | 0.542 | | cosine_accuracy@10 | 0.646 | | cosine_precision@1 | 0.31 | | cosine_precision@3 | 0.1577 | | cosine_precision@5 | 0.1084 | | cosine_precision@10 | 0.0646 | | cosine_recall@1 | 0.31 | | cosine_recall@3 | 0.473 | | cosine_recall@5 | 0.542 | | cosine_recall@10 | 0.646 | | **cosine_ndcg@10** | **0.4684** | | cosine_mrr@10 | 0.4128 | | cosine_map@100 | 0.4224 | | query_active_dims | 4.0 | | query_sparsity_ratio | 0.999 | | corpus_active_dims | 4.0 | | corpus_sparsity_ratio | 0.999 | #### Sparse Information Retrieval * Dataset: `nq_eval_8` * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters: ```json { "max_active_dims": 8 } ``` | Metric | Value | |:----------------------|:----------| | cosine_accuracy@1 | 0.506 | | cosine_accuracy@3 | 0.675 | | cosine_accuracy@5 | 0.742 | | cosine_accuracy@10 | 0.819 | | cosine_precision@1 | 0.506 | | cosine_precision@3 | 0.225 | | cosine_precision@5 | 0.1484 | | cosine_precision@10 | 0.0819 | | cosine_recall@1 | 0.506 | | cosine_recall@3 | 0.675 | | cosine_recall@5 | 0.742 | | cosine_recall@10 | 0.819 | | **cosine_ndcg@10** | **0.659** | | cosine_mrr@10 | 0.6082 | | cosine_map@100 | 0.614 | | query_active_dims | 8.0 | | query_sparsity_ratio | 0.998 | | corpus_active_dims | 8.0 | | corpus_sparsity_ratio | 0.998 | #### Sparse Information Retrieval * Dataset: `nq_eval_16` * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters: ```json { "max_active_dims": 16 } ``` | Metric | Value | |:----------------------|:----------| | cosine_accuracy@1 | 0.696 | | cosine_accuracy@3 | 0.853 | | cosine_accuracy@5 | 0.891 | | cosine_accuracy@10 | 0.921 | | cosine_precision@1 | 0.696 | | cosine_precision@3 | 0.2843 | | cosine_precision@5 | 0.1782 | | cosine_precision@10 | 
0.0921 | | cosine_recall@1 | 0.696 | | cosine_recall@3 | 0.853 | | cosine_recall@5 | 0.891 | | cosine_recall@10 | 0.921 | | **cosine_ndcg@10** | **0.813** | | cosine_mrr@10 | 0.7777 | | cosine_map@100 | 0.781 | | query_active_dims | 16.0 | | query_sparsity_ratio | 0.9961 | | corpus_active_dims | 16.0 | | corpus_sparsity_ratio | 0.9961 | #### Sparse Information Retrieval * Dataset: `nq_eval_32` * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters: ```json { "max_active_dims": 32 } ``` | Metric | Value | |:----------------------|:-----------| | cosine_accuracy@1 | 0.796 | | cosine_accuracy@3 | 0.932 | | cosine_accuracy@5 | 0.957 | | cosine_accuracy@10 | 0.975 | | cosine_precision@1 | 0.796 | | cosine_precision@3 | 0.3107 | | cosine_precision@5 | 0.1914 | | cosine_precision@10 | 0.0975 | | cosine_recall@1 | 0.796 | | cosine_recall@3 | 0.932 | | cosine_recall@5 | 0.957 | | cosine_recall@10 | 0.975 | | **cosine_ndcg@10** | **0.8949** | | cosine_mrr@10 | 0.8683 | | cosine_map@100 | 0.8695 | | query_active_dims | 32.0 | | query_sparsity_ratio | 0.9922 | | corpus_active_dims | 32.0 | | corpus_sparsity_ratio | 0.9922 | #### Sparse Information Retrieval * Dataset: `nq_eval_64` * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters: ```json { "max_active_dims": 64 } ``` | Metric | Value | |:----------------------|:-----------| | cosine_accuracy@1 | 0.9 | | cosine_accuracy@3 | 0.968 | | cosine_accuracy@5 | 0.979 | | cosine_accuracy@10 | 0.988 | | cosine_precision@1 | 0.9 | | cosine_precision@3 | 0.3227 | | cosine_precision@5 | 0.1958 | | cosine_precision@10 | 0.0988 | | cosine_recall@1 | 0.9 | | cosine_recall@3 | 0.968 | | cosine_recall@5 | 0.979 | | cosine_recall@10 | 0.988 | | **cosine_ndcg@10** | **0.9475** | | cosine_mrr@10 | 0.934 | | cosine_map@100 | 0.9345 | | query_active_dims | 64.0 | | query_sparsity_ratio | 0.9844 | | corpus_active_dims | 64.0 | | corpus_sparsity_ratio | 0.9844 | #### Sparse Information Retrieval * Dataset: `nq_eval_128` * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters: ```json { "max_active_dims": 128 } ``` | Metric | Value | |:----------------------|:-----------| | cosine_accuracy@1 | 0.922 | | cosine_accuracy@3 | 0.982 | | cosine_accuracy@5 | 0.984 | | cosine_accuracy@10 | 0.989 | | cosine_precision@1 | 0.922 | | cosine_precision@3 | 0.3273 | | cosine_precision@5 | 0.1968 | | cosine_precision@10 | 0.0989 | | cosine_recall@1 | 0.922 | | cosine_recall@3 | 0.982 | | cosine_recall@5 | 0.984 | | cosine_recall@10 | 0.989 | | **cosine_ndcg@10** | **0.9608** | | cosine_mrr@10 | 0.9512 | | cosine_map@100 | 0.9517 | | query_active_dims | 128.0 | | query_sparsity_ratio | 0.9688 | | corpus_active_dims | 128.0 | | corpus_sparsity_ratio | 0.9688 | #### Sparse Information Retrieval * Dataset: `nq_eval_256` * Evaluated with 
[<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters: ```json { "max_active_dims": 256 } ``` | Metric | Value | |:----------------------|:-----------| | cosine_accuracy@1 | 0.938 | | cosine_accuracy@3 | 0.986 | | cosine_accuracy@5 | 0.988 | | cosine_accuracy@10 | 0.991 | | cosine_precision@1 | 0.938 | | cosine_precision@3 | 0.3287 | | cosine_precision@5 | 0.1976 | | cosine_precision@10 | 0.0991 | | cosine_recall@1 | 0.938 | | cosine_recall@3 | 0.986 | | cosine_recall@5 | 0.988 | | cosine_recall@10 | 0.991 | | **cosine_ndcg@10** | **0.9692** | | cosine_mrr@10 | 0.9618 | | cosine_map@100 | 0.9621 | | query_active_dims | 256.0 | | query_sparsity_ratio | 0.9375 | | corpus_active_dims | 256.0 | | corpus_sparsity_ratio | 0.9375 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### natural-questions * Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17) * Size: 99,000 training samples * Columns: <code>query</code> and <code>answer</code> * Approximate statistics based on the first 1000 samples: | | query | answer | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> | * Samples: | query | answer | |:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>who played the father in papa don't preach</code> | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code> | | <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. 
It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> | | <code>how many puppies can a dog give birth to</code> | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code> | * Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters: ```json { "beta": 0.1, "gamma": 0.1, "loss": "SparseMultipleNegativesRankingLoss(scale=20.0, similarity_fct='cos_sim')" } ``` ### Evaluation Dataset #### natural-questions * Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17) * Size: 1,000 evaluation samples * Columns: <code>query</code> and <code>answer</code> * Approximate statistics based on the first 1000 samples: | | query | answer | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | query | answer | |:-------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> | | <code>what kind of car does jay gatsby drive</code> | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code> | | <code>who sings if i can dream about you</code> | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. 
Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code> | * Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters: ```json { "beta": 0.1, "gamma": 0.1, "loss": "SparseMultipleNegativesRankingLoss(scale=20.0, similarity_fct='cos_sim')" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `learning_rate`: 4e-05 - `num_train_epochs`: 1 - `bf16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 4e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - 
`full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | nq_eval_4_cosine_ndcg@10 | nq_eval_8_cosine_ndcg@10 | nq_eval_16_cosine_ndcg@10 | nq_eval_32_cosine_ndcg@10 | nq_eval_64_cosine_ndcg@10 | nq_eval_128_cosine_ndcg@10 | nq_eval_256_cosine_ndcg@10 | |:------:|:----:|:-------------:|:---------------:|:------------------------:|:------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|:--------------------------:|:--------------------------:| | -1 | -1 | - | - | 0.2499 | 0.4212 | 0.6606 | 0.8510 | 0.9370 | 0.9650 | 0.9709 | | 0.0646 | 100 | 0.3173 | - | - | - | - | - | - | - | - | | 0.1293 | 200 | 0.2771 | - | - | - | - | - | - | - | - | | 0.1939 | 300 | 0.2649 | 0.2495 | 0.3804 | 0.6027 | 0.7670 | 0.8868 | 0.9370 | 0.9592 | 0.9683 | | 0.2586 | 400 | 0.2575 | - | - | - | - | - | - | - | - | | 0.3232 | 500 | 0.2527 | - | - | - | - | - | - | - | - | | 0.3878 | 600 | 0.2491 | 0.2361 | 0.4373 | 0.6326 | 0.7971 | 0.8939 | 0.9403 | 0.9563 | 0.9664 | | 0.4525 | 700 | 0.2462 | - | - | - | - | - | - | - | - | | 0.5171 | 800 | 0.2428 | - | - | - | - | - | - | - | - | | 0.5818 | 900 | 0.2412 | 0.2298 | 0.4553 | 0.6506 | 0.8003 | 0.8943 | 0.9438 | 0.9591 | 0.9683 | | 0.6464 | 1000 | 0.24 | - | - | - | - | - | - | - | - | | 0.7111 | 1100 | 0.238 | - | - | - | - | - | - | - | - | | 0.7757 | 1200 | 0.2375 | 0.2264 | 0.4654 | 0.6586 | 0.8040 | 0.9000 | 0.9468 | 0.9617 | 0.9686 | | 0.8403 | 1300 | 0.2372 | - | - | - | - | - | - | - | - | | 0.9050 | 1400 | 0.236 | - | - | - | - | - | - | - | - | | 0.9696 | 1500 | 0.2362 | 0.2253 | 0.4697 | 0.6600 | 0.8119 | 0.8938 | 0.9449 | 0.9609 | 0.9705 | | -1 | -1 | - | - | 0.4684 | 0.6590 | 0.8130 | 0.8949 | 0.9475 | 0.9608 | 0.9692 | ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). 
- **Energy Consumed**: 0.104 kWh - **Carbon Emitted**: 0.040 kg of CO2 - **Hours Used**: 0.26 hours ### Training Hardware - **On Cloud**: No - **GPU Model**: 1 x NVIDIA GeForce RTX 3090 - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K - **RAM Size**: 31.78 GB ### Framework Versions - Python: 3.11.6 - Sentence Transformers: 4.2.0.dev0 - Transformers: 4.52.4 - PyTorch: 2.7.1+cu126 - Accelerate: 1.5.1 - Datasets: 2.21.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CSRLoss ```bibtex @misc{wen2025matryoshkarevisitingsparsecoding, title={Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation}, author={Tiansheng Wen and Yifei Wang and Zequn Zeng and Zhong Peng and Yudi Su and Xinyang Liu and Bo Chen and Hongwei Liu and Stefanie Jegelka and Chenyu You}, year={2025}, eprint={2503.01776}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2503.01776}, } ``` #### SparseMultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
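To make the sparsity/quality trade-off reported in the Evaluation section easy to reproduce, here is a minimal sketch of running one of these evaluations yourself. The toy query/corpus data (`q1`, `d1`, `d2`) is an illustrative placeholder, not part of this card; the evaluator class and the `max_active_dims` parameter are the ones linked and reported above.

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseInformationRetrievalEvaluator

model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-2")

# Illustrative placeholder data; substitute your own queries, corpus, and relevance judgments.
queries = {"q1": "who is cornelius in the book of acts"}
corpus = {
    "d1": "Cornelius the Centurion Cornelius was a Roman centurion considered one of the first Gentile converts.",
    "d2": "Economy of New Zealand New Zealand's diverse market economy has a sizable service sector.",
}
relevant_docs = {"q1": {"d1"}}

# Cap the number of active dimensions to probe a specific sparsity budget,
# mirroring the nq_eval_32 configuration reported above.
evaluator = SparseInformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="toy_eval_32",
    max_active_dims=32,
)
print(evaluator(model))
```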
demirzeyn/forenmistral
demirzeyn
2025-06-23T14:08:13Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-13T10:50:15Z
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** demirzeyn
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
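The card does not include a loading example, so here is a minimal inference sketch. The `max_seq_length` value is an assumption, and 4-bit loading is inferred from the bnb-4bit base model rather than stated by the author.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="demirzeyn/forenmistral",
    max_seq_length=2048,   # assumption; adjust to your use case
    load_in_4bit=True,     # inferred from the 4-bit base checkpoint
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster inference path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```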
almanach/camembert-base
almanach
2025-06-23T14:02:30Z
1,874,736
89
transformers
[ "transformers", "pytorch", "tf", "safetensors", "camembert", "fill-mask", "fr", "dataset:oscar", "arxiv:1911.03894", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
---
language: fr
license: mit
datasets:
- oscar
---

# CamemBERT: a Tasty French Language Model

## Introduction

[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model. It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data, and pretraining data source domains.

## Pre-trained models

| Model | #params | Arch. | Training data |
|--------------------------------|---------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |

## How to use CamemBERT with HuggingFace

##### Load CamemBERT and its sub-word tokenizer:

```python
from transformers import CamembertModel, CamembertTokenizer

# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-wikipedia-4gb")
camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb")

camembert.eval()  # disable dropout (or leave in train mode to finetune)
```

##### Filling masks using pipeline

```python
from transformers import pipeline

camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-wikipedia-4gb", tokenizer="camembert/camembert-base-wikipedia-4gb")
results = camembert_fill_mask("Le camembert est un fromage de <mask>!")
# results
# [{'sequence': '<s> Le camembert est un fromage de chèvre!</s>', 'score': 0.4937814474105835, 'token': 19370},
#  {'sequence': '<s> Le camembert est un fromage de brebis!</s>', 'score': 0.06255942583084106, 'token': 30616},
#  {'sequence': '<s> Le camembert est un fromage de montagne!</s>', 'score': 0.04340197145938873, 'token': 2364},
#  {'sequence': '<s> Le camembert est un fromage de Noël!</s>', 'score': 0.02823255956172943, 'token': 3236},
#  {'sequence': '<s> Le camembert est un fromage de vache!</s>', 'score': 0.021357402205467224, 'token': 12329}]
```

##### Extract contextual embedding features from Camembert output

```python
import torch

# Tokenize into sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']

# Encode to vocabulary ids and add the special start and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 221, 10, 10600, 14, 8952, 10540, 75, 1114, 6]
# NB: this can be done in one step: tokenizer.encode("J'aime le camembert !")

# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
# return_dict=False keeps the tuple output this example unpacks
# (newer transformers versions return ModelOutput objects by default)
embeddings, _ = camembert(encoded_sentence, return_dict=False)
# embeddings.detach()
# embeddings.size() == torch.Size([1, 10, 768])
# tensor([[[-0.0928,  0.0506, -0.0094,  ..., -0.2388,  0.1177, -0.1302],
#          [ 0.0662,  0.1030, -0.2355,  ..., -0.4224, -0.0574, -0.2802],
#          [-0.0729,  0.0547,  0.0192,  ..., -0.1743,  0.0998, -0.2677],
#          ...,
```

##### Extract contextual embedding features from all Camembert layers

```python
from transformers import CamembertConfig

# (Need to reload the model with a new config)
config = CamembertConfig.from_pretrained("camembert/camembert-base-wikipedia-4gb", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb", config=config)

embeddings, _, all_layer_embeddings = camembert(encoded_sentence, return_dict=False)
# len(all_layer_embeddings) == 13 (input embedding layer + 12 self-attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding: size torch.Size([1, 10, 768])
# tensor([[[-0.0059, -0.0227,  0.0065,  ..., -0.0770,  0.0369,  0.0095],
#          [ 0.2838, -0.1531, -0.3642,  ..., -0.0027, -0.8502, -0.7914],
#          [-0.0073, -0.0338, -0.0011,  ...,  0.0533, -0.0250, -0.0061],
#          ...,
```

## Authors

CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.

## Citation

If you use our work, please cite:

```bibtex
@inproceedings{martin2020camembert,
  title={CamemBERT: a Tasty French Language Model},
  author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
  booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
  year={2020}
}
```
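The examples above use the 4 GB Wikipedia variant; loading this repository's own checkpoint works the same way. A minimal fill-mask sketch (the example sentence is illustrative):

```python
from transformers import pipeline

# This repository's own checkpoint (pipeline tag: fill-mask).
fill_mask = pipeline("fill-mask", model="almanach/camembert-base")
print(fill_mask("Le camembert est <mask> !"))
```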
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-2-seed-42-2025-06-23
morturr
2025-06-23T14:00:11Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-23T14:00:03Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-2-seed-42-2025-06-23 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-2-seed-42-2025-06-23 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
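The card documents only training; as a hedged sketch (not part of the original card), the PEFT adapter can be loaded on top of its Llama-2 base as follows. Note that `meta-llama/Llama-2-7b-hf` is gated, so an authenticated Hugging Face token with accepted license is assumed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # gated; requires access approval + auth token
adapter_id = "morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-2-seed-42-2025-06-23"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA adapter weights
```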
tripolskypetr/Vikhr-Nemo-12B-Instruct
tripolskypetr
2025-06-23T13:58:49Z
0
0
transformers
[ "transformers", "gguf", "en", "ru", "dataset:Vikhrmodels/GrandMaster-PRO-MAX", "dataset:Vikhrmodels/Grounded-RAG-RU-v2", "arxiv:2405.13929", "base_model:mistralai/Mistral-Nemo-Instruct-2407", "base_model:quantized:mistralai/Mistral-Nemo-Instruct-2407", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-23T13:41:18Z
---
license: apache-2.0
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
- Vikhrmodels/Grounded-RAG-RU-v2
language:
- en
- ru
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
library_name: transformers
---

[Readme.md in English](Readme_en.md)

## Vikhr-Nemo-12B-Instruct-R-21-09-24

### Description

**Vikhr-Nemo** is our flagship unimodal LLM (Large Language Model): an improved version of [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) built by the **VikhrModels** team and adapted primarily for Russian and English. Training involved several stages, including **SFT** and **SMPO**, our own variation of DPO; see the section *"How this model was created"* for details.

The model is optimized for a wide range of use cases, including reasoning, summarization, code, roleplay, and multi-turn dialogue. Vikhr-Nemo supports multilingual generation and offers high-performance RAG capabilities. The model achieves the best scores among comparable models on our instruction-following and RAG benchmarks, so we believe that for some tasks (for example, RAG) it can match OpenAI's gpt-4o-mini.

All the code used for training is available in our GitHub repository [effective_llm_alignment](https://github.com/VikhrModels/effective_llm_alignment/), and the main datasets are available in our [HF profile](https://huggingface.co/Vikhrmodels).

### Features

1. High-quality generation in Russian, English, and some other languages, thanks to the [Grandmaster-PRO-MAX](https://huggingface.co/datasets/Vikhrmodels/GrandMaster-PRO-MAX) dataset and the base model
2. Support for system prompts to control the response style
3. Support for up to 128k context tokens thanks to the base model
4. Grounded RAG mode: the model has a dedicated documents role and a special mode of operation in which it first finds the identifiers of the documents relevant to the user's question and then uses them to answer the question, inspired by the analogous capability of the Command-R model

### Metrics and quality evaluation

The model was evaluated on our Russian-language open-source side-by-side benchmark [ru-arena-general](https://github.com/VikhrModels/ru_llm_arena) (50 topics with 10 questions each), where gpt-4-1106-preview acts as the judge, and on a RAG [benchmark](https://colab.research.google.com/drive/16730rWQ4-yGqWoooLs0Ece_16frmOniP?usp=sharing) based on the test split of [Grounded-RAG-v2](https://huggingface.co/datasets/Vikhrmodels/Grounded-RAG-RU-v2), where gpt-4o acts as the judge.

#### Results on Ru-Arena-General

The reference answers the models are compared against come from gpt-3.5-turbo-0125, which therefore has a winrate of 50%. Only part of the leaderboard is shown here; see the benchmark repository for details.

180 samples from the arena leaked into the training set; thanks to Ilya for the information!

| Model Name | Winrate | 95% CI | Average # Tokens |
|--------------------------------------------------|--------|--------------------|------------------|
| gpt-4-1106-preview | 90.9 | (-1.3, 1.0) | 541 |
| gpt-4o-mini | 83.9 | (-1.8, 1.1) | 448 |
| **vikhr-nemo-12b-instruct-r-21-09-24 (180 leaked)** | **79.8** | (-2.2, 1.9) | **627** |
| gemma-2-9b-it-sppo-iter3 | 73.6 | (-1.6, 2.2) | 509 |
| gemma-2-9b-it | 69.2 | (-2.5, 1.9) | 459 |
| t-lite-instruct-0.1 | 64.7 | (-2.1, 1.7) | 810 |
| vikhr-llama3.1-8b-instruct-r-21-09-24 | 63.4 | (-2.1, 2.5) | 618 |
| suzume-llama-3-8B-multilingual-orpo-borda-half | 57.1 | (-1.9, 2.2) | 682 |
| mistral-nemo-instruct-2407 | 50.5 | (-2.7, 2.6) | 403 |
| gpt-3.5-turbo-0125 | 50.0 | (0.0, 0.0) | 220 |
| c4ai-command-r-v01 | 49.0 | (-1.7, 2.2) | 529 |
| meta-llama-3.1-8b-instruct | 43.1 | (-2.8, 2.3) | 628 |

#### Results on the RAG benchmark

The total test set size is 200 examples: 100 in_domain questions and 100 out_of_domain ones. For quality scoring, the judge model gpt-4o was instructed to consider the relevance and factual completeness of the answers, based on the documents and the reference answer from gpt-4-1106-preview. See the prompts and scoring details in the benchmark code on [Colab](https://colab.research.google.com/drive/16730rWQ4-yGqWoooLs0Ece_16frmOniP?usp=sharing).

in_domain - questions that are related, to one degree or another, to the content of the provided documents \
out_of_domain - questions that are deliberately unrelated to the content of the provided documents

<table>
  <thead>
    <tr>
      <th rowspan="2">question_type</th>
      <th colspan="3">gpt-4o</th>
    </tr>
    <tr>
      <th>judge_correct_percent</th>
      <th>avg_answer_match_rougeL</th>
      <th>avg_abs_indexes_diff</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>in_domain</td><td>73%</td><td>0.34</td><td>NaN</td></tr>
    <tr><td>out_of_domain</td><td>81%</td><td>0.20</td><td>NaN</td></tr>
  </tbody>
</table>

<table>
  <thead>
    <tr>
      <th style="visibility: hidden;" rowspan="2">question_type</th>
      <th colspan="3">Vikhr-Nemo-12B-Instruct-R-21-09-24</th>
    </tr>
    <tr>
      <th style="visibility: hidden;">judge_correct_percent</th>
      <th style="visibility: hidden;">avg_answer_match_rougeL</th>
      <th style="visibility: hidden;">avg_abs_indexes_diff</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>in_domain</td><td>68%</td><td>0.41</td><td>0</td></tr>
    <tr><td>out_of_domain</td><td>92%</td><td>0.52</td><td>0</td></tr>
  </tbody>
</table>

<table>
  <thead>
    <tr>
      <th style="visibility: hidden;" rowspan="2">question_type</th>
      <th colspan="3">gpt-4o-mini</th>
    </tr>
    <tr>
      <th style="visibility: hidden;">judge_correct_percent</th>
      <th style="visibility: hidden;">avg_answer_match_rougeL</th>
      <th style="visibility: hidden;">avg_abs_indexes_diff</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>in_domain</td><td>65%</td><td>0.33</td><td>NaN</td></tr>
    <tr><td>out_of_domain</td><td>73%</td><td>0.18</td><td>NaN</td></tr>
  </tbody>
</table>

<table>
  <thead>
    <tr>
      <th style="visibility: hidden;" rowspan="2">question_type</th>
      <th colspan="3">gpt-3.5-turbo-0125</th>
    </tr>
    <tr>
      <th style="visibility: hidden;">judge_correct_percent</th>
      <th style="visibility: hidden;">avg_answer_match_rougeL</th>
      <th style="visibility: hidden;">avg_abs_indexes_diff</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>in_domain</td><td>49%</td><td>0.28</td><td>NaN</td></tr>
    <tr><td>out_of_domain</td><td>76%</td><td>0.20</td><td>NaN</td></tr>
  </tbody>
</table>

### How this model was created

#### Instruction SFT stage

For the SFT training stage we prepared a large (150k instructions) synthetic instruction dataset, [Vikhrmodels/GrandMaster-PRO-MAX](https://huggingface.co/datasets/Vikhrmodels/GrandMaster-PRO-MAX). Its distinguishing feature is built-in CoT (Chain-Of-Thought), which we collected using a modified prompt for gpt-4-turbo; see the dataset card for details.

In addition, to enable RAG grounding, we prepared another synthetic dataset, [Vikhrmodels/Grounded-RAG-RU-v2](https://huggingface.co/datasets/Vikhrmodels/Grounded-RAG-RU-v2) (50k dialogues). Its construction pipeline is too complex to describe briefly; you can read more about it in its dataset card.

#### SMPO alignment stage

To further improve response quality we used the following pipeline:

1) Trained a custom Reward model (it will not be publicly released for now)
2) Deduplicated and filtered the original Vikhrmodels/GrandMaster-PRO-MAX dataset with the RM model, keeping around 10k of the highest-quality and most diverse dialogues
3) Performed Rejection Sampling with the SFT checkpoint using the resulting dataset and the Reward model (generated 7 hypotheses and kept only the 2 worst as rejected)
4) Fine-tuned the SFT checkpoint with our SMPO method on the dataset obtained in step 3

SMPO was designed and chosen as a method to increase the stability of preference training under Rejection Sampling and to achieve the desired margin. The implementations of SMPO, rejection sampling, and so on can be found in our [effective_llm_alignment](https://github.com/VikhrModels/effective_llm_alignment/) library on GitHub.

The idea of using SMPO rather than another PO method arose from a large number of experiments with classical methods and the need for better control over the convergence process. With careful tuning of other methods (for example SimPO) a similar result can be achieved, but we aimed to stabilize this process and combine best practices from the other methods.

### How to work with RAG

The documents role is a list of dictionaries describing the document contents, serialized with `json.dumps(array, ensure_ascii=False)` (see the example below). \
Document content can be provided in **3** different formats: **Markdown**, **HTML**, or **Plain Text**. The content of each document may be a text chunk of up to 4k characters.

```json
[
  {
    "doc_id": (0..5),
    "title": "(null or str)",
    "content": "(html or markdown or plain text)"
  }
]
```

#### Example of correct usage with an OpenAI-like API

Starting the vLLM server: `vllm serve --dtype half --max-model-len 32000 -tp 1 Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24 --api-key token-abc123`

```python
import json

from openai import OpenAI

# Client setup is not shown in the original card; this assumes the local vLLM
# server started with the command above (vLLM defaults to port 8000).
llm_client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")
llm_model = "Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24"

GROUNDED_SYSTEM_PROMPT = "Your task is to answer the user's questions using only the information from the provided documents. Give two answers to each question: one with a list of relevant document identifiers and the second with the answer to the question itself, using documents with these identifiers."

documents = [
    {
        "doc_id": 0,
        "title": "Глобальное потепление: ледники",
        "content": "За последние 50 лет объем ледников в мире уменьшился на 30%"
    },
    {
        "doc_id": 1,
        "title": "Глобальное потепление: Уровень моря",
        "content": "Уровень мирового океана повысился на 20 см с 1880 года и продолжает расти на 3,3 мм в год"
    }
]
sample_history = [
    {'role': 'system', 'content': GROUNDED_SYSTEM_PROMPT},
    {'role': 'documents', 'content': json.dumps(documents, ensure_ascii=False)},
    {'role': 'user', 'content': 'Глобальное потепление'}
]
relevant_indexes = llm_client.chat.completions.create(
    model=llm_model,
    messages=sample_history,
    temperature=0.0,
    max_tokens=2048
).choices[0].message.content

print('Using documents: ' + relevant_indexes + '\n----')
final_answer = llm_client.chat.completions.create(
    model=llm_model,
    messages=sample_history + [{'role': 'assistant', 'content': relevant_indexes}],
    temperature=0.3,
    max_tokens=2048
).choices[0].message.content

print(final_answer)
```

The answer after running the code will look approximately like this:

    Using documents: {"relevant_doc_ids": [0, 1]}
    ----
    Global warming is a long-term increase in the average temperature of the Earth's atmosphere and oceans. This phenomenon has many consequences for our planet, among which two key aspects can be highlighted based on the data available to me:

    1. **Shrinking glacier volume**: Over the last 50 years, a significant reduction in glacier volume has been observed worldwide. According to the data, glacier volume has decreased by 30%. This may be linked to glacier melt caused by rising temperatures, which is one of the signs of global warming.

    2. **Rising sea level**: The level of the world ocean is also increasing, which is connected with the melting of glaciers and ice sheets, as well as the thermal expansion of water as temperatures rise. Since 1880 the sea level has risen by 20 centimeters, and this process continues, with an annual increase of 3.3 millimeters.

    These changes have serious consequences for ecosystems, the climate, and human society. Glacier melt leads to sea level rise, which can result in the flooding of coastal areas and islands, as well as changes in water resources and climate patterns.

Using the model's first response, `relevant_indexes` (JSON), you can tell whether the model found information in the documents: it is trained to return an empty array when there is none, in which case it will reply that it could not find the information in the knowledge base (when generating the second answer). A minimal parsing sketch is shown right after the limitations list below.

### Nuances and limitations

- The model has a **low level of response safety** and is aimed at correctly and fully following instructions; keep this in mind when using it and test it yourself. This is partially mitigated by system prompts and by additional reminders about the importance of safety in the user prompt.
- System prompts are not intended for character descriptions; we recommend using them to specify the response style (such as "answer only in json format"). Moreover, it is preferable to write them **in English**, as that is how they appeared in the dataset; using English in system prompts does not affect the language of the answer.
- RAG mode **requires** the `GROUNDED_SYSTEM_PROMPT` system prompt described in the *How to work with RAG* section. The model may also sometimes add general information from its own knowledge to the answer alongside what is present in the documents.
- The model works best with a low temperature (0.1-0.5) and with top_k (30-50); at temperature 1.0 random generation defects were observed.
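As announced above, here is a minimal sketch (continuing the example code) for turning the model's first answer into a usable document filter; the fallback message is illustrative, not part of the model's training:

```python
import json

parsed = json.loads(relevant_indexes)        # e.g. {"relevant_doc_ids": [0, 1]}
doc_ids = parsed.get("relevant_doc_ids", [])

if not doc_ids:
    # The model returns an empty array when the documents contain no answer.
    print("No relevant documents found; the second answer will say so.")
else:
    used = [doc for doc in documents if doc["doc_id"] in doc_ids]
    print("Answering from:", [doc["title"] for doc in used])
```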
### Authors

- Sergei Bratchikov, [NLP Wanderer](https://t.me/nlpwanderer), Vikhr Team
- Konstantin Korolev, Vikhr Team
- Aleksandr Nikolich, Vikhr Team

### Cite

```
@inproceedings{nikolich2024vikhr,
  title={Vikhr: Constructing a State-of-the-art Bilingual Open-Source Instruction-Following Large Language Model for {Russian}},
  author={Aleksandr Nikolich and Konstantin Korolev and Sergei Bratchikov and Igor Kiselev and Artem Shelmanov},
  booktitle={Proceedings of the 4th Workshop on Multilingual Representation Learning (MRL) @ EMNLP-2024},
  year={2024},
  publisher={Association for Computational Linguistics},
  url={https://arxiv.org/pdf/2405.13929}
}
```
floflodebilbao/T5_ACLsum_all_aspects
floflodebilbao
2025-06-23T13:58:22Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-large", "base_model:finetune:google-t5/t5-large", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-23T13:36:12Z
--- library_name: transformers license: apache-2.0 base_model: t5-large tags: - generated_from_trainer metrics: - rouge - bleu - precision - recall - f1 model-index: - name: T5_ACLsum_all_aspects results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # T5_ACLsum_all_aspects This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 0.1966 - Rouge2: 0.0527 - Rougel: 0.1539 - Rougelsum: 0.1544 - Gen Len: 20.0 - Bleu: 0.0225 - Precisions: 0.0794 - Brevity Penalty: 0.5477 - Length Ratio: 0.6242 - Translation Length: 4408.0 - Reference Length: 7062.0 - Precision: 0.8598 - Recall: 0.8528 - F1: 0.8562 - Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Precision | Recall | F1 | Hashcode | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:------:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|:---------:|:------:|:------:|:---------------------------------------------------------:| | No log | 1.0 | 19 | nan | 0.1966 | 0.0527 | 0.1539 | 0.1544 | 20.0 | 0.0225 | 0.0794 | 0.5477 | 0.6242 | 4408.0 | 7062.0 | 0.8598 | 0.8528 | 0.8562 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) | | No log | 2.0 | 38 | nan | 0.1966 | 0.0527 | 0.1539 | 0.1544 | 20.0 | 0.0225 | 0.0794 | 0.5477 | 0.6242 | 4408.0 | 7062.0 | 0.8598 | 0.8528 | 0.8562 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) | | No log | 3.0 | 57 | nan | 0.1966 | 0.0527 | 0.1539 | 0.1544 | 20.0 | 0.0225 | 0.0794 | 0.5477 | 0.6242 | 4408.0 | 7062.0 | 0.8598 | 0.8528 | 0.8562 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) | | No log | 4.0 | 76 | nan | 0.1966 | 0.0527 | 0.1539 | 0.1544 | 20.0 | 0.0225 | 0.0794 | 0.5477 | 0.6242 | 4408.0 | 7062.0 | 0.8598 | 0.8528 | 0.8562 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
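The card does not document the expected inference format, so the following is a hedged sketch only: the `summarize:` prefix is the common convention for T5 checkpoints and is an assumption here, as is the example input; the `max_length` of 20 mirrors the generation length reported above.

```python
from transformers import pipeline

# Assumed usage; verify the input format against the training setup before relying on it.
summarizer = pipeline("summarization", model="floflodebilbao/T5_ACLsum_all_aspects")
text = "We present a new approach to abstractive summarization of scientific papers ..."
print(summarizer("summarize: " + text, max_length=20))
```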
HaoxuanYao/distilbert-rotten-tomatoes
HaoxuanYao
2025-06-23T13:57:56Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-23T13:53:53Z
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-rotten-tomatoes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-rotten-tomatoes This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
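No usage example is provided; since this record carries the `text-classification` pipeline tag, a minimal sketch (the label set is an assumption, as it is not documented on the card) might be:

```python
from transformers import pipeline

# Hypothetical usage; the labels this checkpoint emits are not documented on the card.
clf = pipeline("text-classification", model="HaoxuanYao/distilbert-rotten-tomatoes")
print(clf("A gripping, beautifully shot film."))
```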
cpheemagazine/bfc8d0f6-424d-4171-8222-f9d530740286
cpheemagazine
2025-06-23T13:56:27Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-3B", "base_model:adapter:unsloth/Llama-3.2-3B", "license:llama3.2", "region:us" ]
null
2025-06-23T13:50:24Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-3B tags: - axolotl - generated_from_trainer model-index: - name: bfc8d0f6-424d-4171-8222-f9d530740286 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.10.0.dev0` ```yaml adapter: lora base_model: unsloth/Llama-3.2-3B bf16: true datasets: - data_files: - aaedc3b2ee6ebea9_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: instruct field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' eval_max_new_tokens: 128 evals_per_epoch: 4 flash_attention: false fp16: false gradient_accumulation_steps: 1 gradient_checkpointing: true group_by_length: true hub_model_id: cpheemagazine/bfc8d0f6-424d-4171-8222-f9d530740286 learning_rate: 0.0002 load_in_4bit: false logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: false lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 96 micro_batch_size: 16 mlflow_experiment_name: /tmp/aaedc3b2ee6ebea9_train_data.json output_dir: llama3_lora_output rl: null sample_packing: true save_steps: 4 sequence_len: 2048 tf32: true tokenizer_type: AutoTokenizer train_on_inputs: true trl: null trust_remote_code: true wandb_name: 730f6b7a-2c4f-4f01-b7e8-c6eba843ca7d wandb_project: Gradients-On-Demand wandb_run: llama3_h200_run wandb_runid: 730f6b7a-2c4f-4f01-b7e8-c6eba843ca7d warmup_steps: 100 weight_decay: 0.01 ``` </details><br> # bfc8d0f6-424d-4171-8222-f9d530740286 This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 96 ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.5.1+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
suminseo/llama3.1_0623_3
suminseo
2025-06-23T13:56:23Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-23T13:54:34Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** suminseo - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
yu3733/paligemma2-3b-lora-vqa-v21-enhanced-d2000-r16
yu3733
2025-06-23T13:56:13Z
0
0
peft
[ "peft", "safetensors", "paligemma", "lora", "adapter", "visual-question-answering", "image-to-text", "v2.1-enhanced", "en", "base_model:google/paligemma2-3b-mix-224", "base_model:adapter:google/paligemma2-3b-mix-224", "region:us" ]
image-to-text
2025-06-23T13:55:59Z
--- tags: - paligemma - lora - adapter - visual-question-answering - image-to-text - v2.1-enhanced base_model: google/paligemma2-3b-mix-224 language: - en library_name: peft --- # paligemma2-3b-lora-vqa-v21-enhanced-d2000-r16 - v2.1 Enhanced This is a **v2.1 Enhanced** LoRA adapter for PaliGemma-2 3B trained on VQA tasks. ## 🆕 v2.1 Enhanced Improvements - **EOS Token Learning**: Explicit EOS tokens for better generation termination - **Memory Optimization**: 16-step gradient accumulation for stability - **VizWiz Format Support**: Full support with most frequent answer selection - **Robust Label Masking**: Enhanced prompt masking during training - **Production Memory Management**: Advanced garbage collection ## Usage ```python from transformers import AutoProcessor, PaliGemmaForConditionalGeneration from peft import PeftModel import torch from PIL import Image # Base model base_model_id = "google/paligemma2-3b-mix-224" adapter_id = "yu3733/paligemma2-3b-lora-vqa-v21-enhanced-d2000-r16" # Load processor processor = AutoProcessor.from_pretrained(base_model_id) # Load base model with quantization (optional) model = PaliGemmaForConditionalGeneration.from_pretrained( base_model_id, torch_dtype=torch.float16, device_map="auto" ) # Load LoRA adapter model = PeftModel.from_pretrained(model, adapter_id) # Prepare input image = Image.open("your_image.jpg") prompt = "<image>\nQuestion: What is in this image?\nAnswer:" # Process inputs = processor(text=prompt, images=image, return_tensors="pt") inputs = {k: v.to(model.device) for k, v in inputs.items()} # Generate with torch.no_grad(): outputs = model.generate(**inputs, max_new_tokens=20) # Decode print(processor.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Configuration - **Base Model**: google/paligemma2-3b-mix-224 - **LoRA Rank**: 16 - **Training Framework**: PEFT + Transformers - **Optimization**: 4-bit quantization + gradient checkpointing - **Dataset**: VizWiz VQA ## License Same as the base model (see google/paligemma2-3b-mix-224)
newtts2017/t7axt8u7
newtts2017
2025-06-23T13:49:41Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-23T13:30:42Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: t7axt8u7 --- # T7Axt8U7 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `t7axt8u7` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "t7axt8u7", "lora_weights": "https://huggingface.co/newtts2017/t7axt8u7/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('newtts2017/t7axt8u7', weight_name='lora.safetensors') image = pipeline('t7axt8u7').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/newtts2017/t7axt8u7/discussions) to add images that show off what you’ve made with this LoRA.
danaroth/zhang-image-restoration
danaroth
2025-06-23T13:47:21Z
0
0
null
[ "license:mit", "region:us" ]
null
2025-06-23T13:41:27Z
--- license: mit ---

# Description

This repository contains models for image restoration collected from the following sources:

<https://github.com/cszn/IRCNN>
<https://github.com/cszn/DPIR>
<https://github.com/yuanzhi-zhu/DiffPIR>

# Citation

If you use these models, please cite:

```bibtex
@inproceedings{zhang2017learning,
  title={Learning Deep CNN Denoiser Prior for Image Restoration},
  author={Zhang, Kai and Zuo, Wangmeng and Gu, Shuhang and Zhang, Lei},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
  pages={3929--3938},
  year={2017},
}
```

```bibtex
@article{zhang2021plug,
  title={Plug-and-Play Image Restoration with Deep Denoiser Prior},
  author={Zhang, Kai and Li, Yawei and Zuo, Wangmeng and Zhang, Lei and Van Gool, Luc and Timofte, Radu},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  volume={44},
  number={10},
  pages={6360-6376},
  year={2021}
}
```

```bibtex
@inproceedings{zhu2023denoising, % DiffPIR
  title={Denoising Diffusion Models for Plug-and-Play Image Restoration},
  author={Yuanzhi Zhu and Kai Zhang and Jingyun Liang and Jiezhang Cao and Bihan Wen and Radu Timofte and Luc Van Gool},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition Workshops (NTIRE)},
  year={2023},
}
```
fareedaidil/lora_model
fareedaidil
2025-06-23T13:44:10Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/phi-4-unsloth-bnb-4bit", "base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-22T05:44:12Z
--- base_model: unsloth/phi-4-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** fareedaidil - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
GeorgeUwaifo/distilgpt2-gitek-finetuned-wikitext2
GeorgeUwaifo
2025-06-23T13:43:20Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-23T13:42:19Z
--- library_name: transformers license: apache-2.0 base_model: distilgpt2 tags: - generated_from_trainer model-index: - name: distilgpt2-gitek-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-gitek-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.6425 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7194 | 1.0 | 2334 | 3.6663 | | 3.6195 | 2.0 | 4668 | 3.6462 | | 3.5733 | 3.0 | 7002 | 3.6425 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
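Given the `text-generation` pipeline tag on this record, a minimal inference sketch (the prompt and generation length are illustrative choices, not from the card) could be:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="GeorgeUwaifo/distilgpt2-gitek-finetuned-wikitext2")
# Prompt and max_new_tokens are arbitrary choices for illustration.
print(generator("The history of natural language processing", max_new_tokens=40)[0]["generated_text"])
```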
sungkwan2/layoutlmv2-base-uncased_finetuned
sungkwan2
2025-06-23T13:42:38Z
3
0
transformers
[ "transformers", "tensorboard", "safetensors", "layoutlmv2", "document-question-answering", "generated_from_trainer", "base_model:microsoft/layoutlmv2-base-uncased", "base_model:finetune:microsoft/layoutlmv2-base-uncased", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
document-question-answering
2025-06-22T17:07:00Z
--- library_name: transformers license: cc-by-nc-sa-4.0 base_model: microsoft/layoutlmv2-base-uncased tags: - generated_from_trainer model-index: - name: layoutlmv2-base-uncased_finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-base-uncased_finetuned This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
kobaIK/ppo-LunarLander-v2
kobaIK
2025-06-23T13:41:51Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-23T13:16:23Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 275.33 +/- 14.06 name: mean_reward verified: false ---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual naming convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed, not documented on this card.
checkpoint = load_from_hub("kobaIK/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
sivanimohan/my-bio-bert-qa-model
sivanimohan
2025-06-23T13:41:06Z
17
0
transformers
[ "transformers", "safetensors", "bert", "question-answering", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
question-answering
2025-06-21T15:47:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MattMcG/titles_wee_qwen_split_only
MattMcG
2025-06-23T13:41:02Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-1.7B-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-23T13:39:50Z
--- base_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** MattMcG - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-1.7B-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
adv010/bert-base-emotion-intent
adv010
2025-06-23T13:40:15Z
11
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-22T18:54:51Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-base-emotion-intent results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-emotion-intent This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1434 - Accuracy: 0.9355 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 33 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3946 | 1.0 | 1000 | 0.1669 | 0.933 | | 0.1193 | 2.0 | 2000 | 0.1434 | 0.9355 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
CDL-RecSys/blip2-opt-2.7b-hm
CDL-RecSys
2025-06-23T13:40:14Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Salesforce/blip2-opt-2.7b", "base_model:adapter:Salesforce/blip2-opt-2.7b", "region:us" ]
null
2025-06-23T12:57:06Z
--- base_model: Salesforce/blip2-opt-2.7b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
eyepyon/judicial-exam-llama3-jpv5-merged-v3
eyepyon
2025-06-23T13:36:41Z
0
0
peft
[ "peft", "safetensors", "llama", "japanese", "legal", "judicial-exam", "司法試験", "fine-tuned", "lora", "ja", "dataset:custom-judicial-exam-dataset", "base_model:elyza/Llama-3-ELYZA-JP-8B", "base_model:adapter:elyza/Llama-3-ELYZA-JP-8B", "license:llama3", "region:us" ]
null
2025-06-23T13:33:42Z
--- license: llama3 base_model: elyza/Llama-3-ELYZA-JP-8B tags: - japanese - legal - judicial-exam - 司法試験 - fine-tuned - llama - peft - lora language: - ja datasets: - custom-judicial-exam-dataset ---

# Japanese LLM Specialized for the Bar Exam

## Model Overview

This model is a specialized model fine-tuned on Japanese bar exam questions, built on elyza/Llama-3-ELYZA-JP-8B.

## Features

- **Base model**: elyza/Llama-3-ELYZA-JP-8B
- **Specialized domain**: the Japanese bar exam (constitutional law, civil law, criminal law, etc.)
- **Language**: Japanese
- **Fine-tuning method**: QLoRA (Quantized Low-Rank Adaptation)

## Training Information

- **Training examples**: 399
- **Epochs**: 1
- **Training time**: 0:04:03.975761
- **LoRA rank**: 4
- **Learning rate**: 1e-05

## Usage

### LoRA adapter version (eyepyon/judicial-exam-llama3-jpv5-lora-v3)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("elyza/Llama-3-ELYZA-JP-8B")
tokenizer = AutoTokenizer.from_pretrained("elyza/Llama-3-ELYZA-JP-8B")
model = PeftModel.from_pretrained(base_model, "eyepyon/judicial-exam-llama3-jpv5-lora-v3")

inputs = tokenizer("司法試験問題:", return_tensors="pt")
outputs = model.generate(**inputs, max_length=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

### Merged model version (eyepyon/judicial-exam-llama3-jpv5-merged-v3)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("eyepyon/judicial-exam-llama3-jpv5-merged-v3")
tokenizer = AutoTokenizer.from_pretrained("eyepyon/judicial-exam-llama3-jpv5-merged-v3")

inputs = tokenizer("司法試験問題:", return_tensors="pt")
outputs = model.generate(**inputs, max_length=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## Notes

- This model was created for educational and research purposes
- Do not use it for the actual bar exam or for legal decision-making
- Treat its outputs as reference material only

## License

This model follows the Llama 3 license of the base model.
GoshKolotyan/w2v-bert-2.0-Armenian
GoshKolotyan
2025-06-23T13:33:29Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:facebook/w2v-bert-2.0", "base_model:finetune:facebook/w2v-bert-2.0", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-06-23T10:21:28Z
--- library_name: transformers license: mit base_model: facebook/w2v-bert-2.0 tags: - generated_from_trainer datasets: - common_voice_17_0 model-index: - name: w2v-bert-2.0-armenian-new-dataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # w2v-bert-2.0-armenian-new-dataset This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.1424 - eval_wer: 0.1440 - eval_cer: 0.0254 - eval_runtime: 214.2499 - eval_samples_per_second: 19.981 - eval_steps_per_second: 2.502 - epoch: 6.7508 - step: 1100 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
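The card omits an inference example; since this record carries the `automatic-speech-recognition` pipeline tag, a minimal sketch (the audio filename is a placeholder) might be:

```python
from transformers import pipeline

# "sample_armenian.wav" is a placeholder path; the pipeline decodes and resamples input audio as needed.
asr = pipeline("automatic-speech-recognition", model="GoshKolotyan/w2v-bert-2.0-Armenian")
print(asr("sample_armenian.wav")["text"])
```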
tamazightdev/lora_model
tamazightdev
2025-06-23T13:26:30Z
0
0
null
[ "safetensors", "unsloth", "license:mit", "region:us" ]
null
2025-06-23T13:25:29Z
--- license: mit tags: - unsloth ---
bharatwalejain/gemma-news-retrieval
bharatwalejain
2025-06-23T13:24:58Z
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-23T13:22:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlx-community/ivrit-ai-whisper-large-v3-mlx
mlx-community
2025-06-23T13:24:00Z
0
0
mlx
[ "mlx", "whisper", "he", "base_model:ivrit-ai/whisper-large-v3", "base_model:finetune:ivrit-ai/whisper-large-v3", "region:us" ]
null
2025-06-19T12:45:08Z
--- library_name: mlx language: - he base_model: - ivrit-ai/whisper-large-v3 ---

# ivrit-ai-whisper-large-v3-mlx

This model was converted to MLX format from [`ivrit-ai/whisper-large-v3`](https://huggingface.co/ivrit-ai/whisper-large-v3).

## Use with mlx

```bash
pip install mlx-whisper
```

```python
import mlx_whisper

result = mlx_whisper.transcribe(
    "FILE_NAME",
    path_or_hf_repo="mlx-community/ivrit-ai-whisper-large-v3-mlx",
)
```
yu3733/paligemma2-3b-lora-vqa-v21-enhanced-d4000-r4
yu3733
2025-06-23T13:23:40Z
0
0
peft
[ "peft", "safetensors", "paligemma", "lora", "adapter", "visual-question-answering", "image-to-text", "v2.1-enhanced", "en", "base_model:google/paligemma2-3b-mix-224", "base_model:adapter:google/paligemma2-3b-mix-224", "region:us" ]
image-to-text
2025-06-23T13:23:27Z
--- tags: - paligemma - lora - adapter - visual-question-answering - image-to-text - v2.1-enhanced base_model: google/paligemma2-3b-mix-224 language: - en library_name: peft --- # paligemma2-3b-lora-vqa-v21-enhanced-d4000-r4 - v2.1 Enhanced This is a **v2.1 Enhanced** LoRA adapter for PaliGemma-2 3B trained on VQA tasks. ## 🆕 v2.1 Enhanced Improvements - **EOS Token Learning**: Explicit EOS tokens for better generation termination - **Memory Optimization**: 16-step gradient accumulation for stability - **VizWiz Format Support**: Full support with most frequent answer selection - **Robust Label Masking**: Enhanced prompt masking during training - **Production Memory Management**: Advanced garbage collection ## Usage ```python from transformers import AutoProcessor, PaliGemmaForConditionalGeneration from peft import PeftModel import torch from PIL import Image # Base model base_model_id = "google/paligemma2-3b-mix-224" adapter_id = "yu3733/paligemma2-3b-lora-vqa-v21-enhanced-d4000-r4" # Load processor processor = AutoProcessor.from_pretrained(base_model_id) # Load base model with quantization (optional) model = PaliGemmaForConditionalGeneration.from_pretrained( base_model_id, torch_dtype=torch.float16, device_map="auto" ) # Load LoRA adapter model = PeftModel.from_pretrained(model, adapter_id) # Prepare input image = Image.open("your_image.jpg") prompt = "<image>\nQuestion: What is in this image?\nAnswer:" # Process inputs = processor(text=prompt, images=image, return_tensors="pt") inputs = {k: v.to(model.device) for k, v in inputs.items()} # Generate with torch.no_grad(): outputs = model.generate(**inputs, max_new_tokens=20) # Decode print(processor.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Configuration - **Base Model**: google/paligemma2-3b-mix-224 - **LoRA Rank**: 4 - **Training Framework**: PEFT + Transformers - **Optimization**: 4-bit quantization + gradient checkpointing - **Dataset**: VizWiz VQA ## License Same as the base model (see google/paligemma2-3b-mix-224)
qowiejio/rag-model
qowiejio
2025-06-23T13:22:43Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-23T13:22:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HAO-AI/cdtron-cognitive-decline
HAO-AI
2025-06-23T13:20:33Z
6
0
null
[ "safetensors", "megatron-bert", "clinical-nlp", "cognitive-decline", "electronic-health-records", "transformer", "medical-ai", "healthcare", "en", "license:mit", "region:us" ]
null
2025-06-22T19:46:53Z
--- license: mit language: - en tags: - clinical-nlp - cognitive-decline - electronic-health-records - transformer - medical-ai - healthcare ---

# CD-Tron: Cognitive Decline Detection from EHR using Large Clinical Language Model

**Model Name:** CD-Tron

## Model Description

CD-Tron is a fine-tuned large clinical language model based on [GatorTron](https://huggingface.co/UFNLP/gatortron-base) for the task of detecting cognitive decline from free-text clinical notes. The model was fine-tuned on real-world clinical data, and synthetic data can be used for demonstration.

---

## Intended Use

- Task: Cognitive decline detection / screening
- Input: Free-text clinical notes (EHR sections, progress notes, discharge summaries, etc.)
- Output: Binary classification:
  - 0 = No cognitive decline
  - 1 = Cognitive decline detected

This model is for research purposes and proof-of-concept demonstration.

---

## How to Use

Example code to load and run inference:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("HAO-AI/cdtron-cognitive-decline")
model = AutoModelForSequenceClassification.from_pretrained("HAO-AI/cdtron-cognitive-decline")

text = "Patient presents with recent memory loss, confusion, and impaired attention..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)
outputs = model(**inputs)
prediction = outputs.logits.argmax(dim=1).item()
print("Predicted label:", prediction)
```

---

## Citation

If you find this work useful, please cite:

```bibtex
@article{guan2025cd,
  title={CD-Tron: Leveraging large clinical language model for early detection of cognitive decline from electronic health records},
  author={Guan, Hao and Novoa-Laurentiev, John and Zhou, Li},
  journal={Journal of Biomedical Informatics},
  pages={104830},
  year={2025},
  publisher={Elsevier}
}
```
NishithR/Pyramids
NishithR
2025-06-23T13:19:24Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2025-06-23T13:19:21Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids ---

# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: NishithR/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-2-seed-28-2025-06-23
morturr
2025-06-23T13:18:56Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-23T13:18:39Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-2-seed-28-2025-06-23 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-2-seed-28-2025-06-23 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 28 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
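This PEFT card lists only training hyperparameters; a minimal sketch for loading the adapter on its gated base model (this usage is an assumption, not documented on the card) could be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # gated; requires an accepted license and an auth token
adapter_id = "morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-2-seed-28-2025-06-23"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights to the base model
```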
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-1-seed-42-2025-06-23
morturr
2025-06-23T13:16:35Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-23T13:16:27Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-1-seed-42-2025-06-23 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-1-seed-42-2025-06-23 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
floflodebilbao/T5_sum_outcome2
floflodebilbao
2025-06-23T13:15:10Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-large", "base_model:finetune:google-t5/t5-large", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-23T13:13:32Z
--- library_name: transformers license: apache-2.0 base_model: t5-large tags: - generated_from_trainer metrics: - rouge - bleu - precision - recall - f1 model-index: - name: T5_sum_outcome2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # T5_sum_outcome2 This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 0.1292 - Rouge2: 0.0094 - Rougel: 0.0993 - Rougelsum: 0.0994 - Gen Len: 20.0 - Bleu: 0.0 - Precisions: 0.0455 - Brevity Penalty: 0.553 - Length Ratio: 0.628 - Translation Length: 736.0 - Reference Length: 1172.0 - Precision: 0.8487 - Recall: 0.8472 - F1: 0.8478 - Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Precision | Recall | F1 | Hashcode | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:----:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|:---------:|:------:|:------:|:---------------------------------------------------------:| | No log | 1.0 | 7 | nan | 0.1292 | 0.0094 | 0.0993 | 0.0994 | 20.0 | 0.0 | 0.0455 | 0.553 | 0.628 | 736.0 | 1172.0 | 0.8487 | 0.8472 | 0.8478 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) | | No log | 2.0 | 14 | nan | 0.1292 | 0.0094 | 0.0993 | 0.0994 | 20.0 | 0.0 | 0.0455 | 0.553 | 0.628 | 736.0 | 1172.0 | 0.8487 | 0.8472 | 0.8478 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) | | No log | 3.0 | 21 | nan | 0.1292 | 0.0094 | 0.0993 | 0.0994 | 20.0 | 0.0 | 0.0455 | 0.553 | 0.628 | 736.0 | 1172.0 | 0.8487 | 0.8472 | 0.8478 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) | | No log | 4.0 | 28 | nan | 0.1292 | 0.0094 | 0.0993 | 0.0994 | 20.0 | 0.0 | 0.0455 | 0.553 | 0.628 | 736.0 | 1172.0 | 0.8487 | 0.8472 | 0.8478 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
Alecardo/test23-6-6859513392dd80aceb629b9e
Alecardo
2025-06-23T13:12:50Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-23T13:05:55Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # Test23 6 6859513392Dd80Aceb629B9E <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/Alecardo/test23-6-6859513392dd80aceb629b9e/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Alecardo/test23-6-6859513392dd80aceb629b9e', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Alecardo/test23-6-6859513392dd80aceb629b9e/discussions) to add images that show off what you’ve made with this LoRA.
ishk9999/routing-gemma-3-1b-mimic-cxr-dataset-fine-tuning-mk-2
ishk9999
2025-06-23T13:11:19Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-1b-it", "base_model:finetune:google/gemma-3-1b-it", "endpoints_compatible", "region:us" ]
null
2025-06-23T12:46:57Z
--- base_model: google/gemma-3-1b-it library_name: transformers model_name: routing-gemma-3-1b-mimic-cxr-dataset-fine-tuning-mk-2 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for routing-gemma-3-1b-mimic-cxr-dataset-fine-tuning-mk-2 This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ishk9999/routing-gemma-3-1b-mimic-cxr-dataset-fine-tuning-mk-2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.52.4 - Pytorch: 2.6.0+cu124 - Datasets: 3.3.2 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
NJUDeepEngine/llm_based_atp
NJUDeepEngine
2025-06-23T13:09:48Z
10
1
null
[ "safetensors", "qwen2", "lean4", "theorem-proving", "formal-mathematics", "text-generation", "conversational", "en", "dataset:internlm/Lean-Workbook", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "license:apache-2.0", "region:us" ]
text-generation
2025-05-17T13:21:28Z
--- license: apache-2.0 datasets: - internlm/Lean-Workbook language: - en base_model: - Qwen/Qwen2.5-Math-7B tags: - lean4 - theorem-proving - formal-mathematics metrics: - accuracy pipeline_tag: text-generation --- # LLM-based Automated Theorem Proving Hinges on Scalable Synthetic Data Generation This repository contains the model used in the paper *"LLM-based Automated Theorem Proving Hinges on Scalable Synthetic Data Generation"*. ## Model The model is fully fine-tuned from [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B). ## Usage Please refer to the [GitHub page](https://github.com/NJUDeepEngine/llm_based_atp) for details.
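Since the card defers usage details to GitHub, here is a generic causal-LM sketch only; the tags identify a Qwen2-family text-generation model, but the Lean 4 prompt format below is an assumption, not the project's documented interface:

```python
# Hedged sketch: plain transformers generation for a Qwen2-family causal LM.
# The theorem-statement prompt is a guess; the repo's own scripts may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NJUDeepEngine/llm_based_atp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

theorem = "theorem add_comm' (a b : Nat) : a + b = b + a := by"
inputs = tokenizer(theorem, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```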
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-2-seed-28-2025-06-23
morturr
2025-06-23T13:07:01Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-23T13:06:53Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-2-seed-28-2025-06-23 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-2-seed-28-2025-06-23 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 28 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
floflodebilbao/T5_sum_approach2
floflodebilbao
2025-06-23T13:06:58Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-large", "base_model:finetune:google-t5/t5-large", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-23T12:51:22Z
--- library_name: transformers license: apache-2.0 base_model: t5-large tags: - generated_from_trainer metrics: - rouge - bleu - precision - recall - f1 model-index: - name: T5_sum_approach2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # T5_sum_approach2 This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 0.2557 - Rouge2: 0.1023 - Rougel: 0.2155 - Rougelsum: 0.2169 - Gen Len: 20.0 - Bleu: 0.0544 - Precisions: 0.1339 - Brevity Penalty: 0.5174 - Length Ratio: 0.6028 - Translation Length: 736.0 - Reference Length: 1221.0 - Precision: 0.8714 - Recall: 0.8604 - F1: 0.8658 - Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Precision | Recall | F1 | Hashcode | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:------:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|:---------:|:------:|:------:|:---------------------------------------------------------:| | No log | 1.0 | 7 | nan | 0.2557 | 0.1023 | 0.2155 | 0.2169 | 20.0 | 0.0544 | 0.1339 | 0.5174 | 0.6028 | 736.0 | 1221.0 | 0.8714 | 0.8604 | 0.8658 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) | | No log | 2.0 | 14 | nan | 0.2557 | 0.1023 | 0.2155 | 0.2169 | 20.0 | 0.0544 | 0.1339 | 0.5174 | 0.6028 | 736.0 | 1221.0 | 0.8714 | 0.8604 | 0.8658 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) | | No log | 3.0 | 21 | nan | 0.2557 | 0.1023 | 0.2155 | 0.2169 | 20.0 | 0.0544 | 0.1339 | 0.5174 | 0.6028 | 736.0 | 1221.0 | 0.8714 | 0.8604 | 0.8658 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) | | No log | 4.0 | 28 | nan | 0.2557 | 0.1023 | 0.2155 | 0.2169 | 20.0 | 0.0544 | 0.1339 | 0.5174 | 0.6028 | 736.0 | 1221.0 | 0.8714 | 0.8604 | 0.8658 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
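No inference example is given in the card above. Assuming the checkpoint is meant for summarization, as the model name suggests, a minimal pipeline sketch (with an illustrative input and the conventional T5 task prefix) might be:

```python
# Hedged sketch: a t5-large fine-tune served through the summarization pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="floflodebilbao/T5_sum_approach2")
text = (
    "summarize: The study evaluated a new approach to clinical trial "
    "reporting across twelve hospital sites over two years..."
)
# max_length=20 matches the generation length reported in the card's metrics.
print(summarizer(text, max_length=20)[0]["summary_text"])
```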
John6666/pony-semi-realistic-v10-sdxl
John6666
2025-06-23T13:06:38Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "semi-realistic", "anime", "2.5D", "pony", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-06-23T13:00:50Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - semi-realistic - anime - 2.5D - pony --- Original model is [here](https://civitai.com/models/1708096/pony-semi-realistic?modelVersionId=1932962). This model created by [shishu21](https://civitai.com/user/shishu21).
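The card only links to the Civitai source. Since the tags advertise `diffusers:StableDiffusionXLPipeline`, a standard SDXL loading sketch (the prompt is illustrative, not from the model author) would presumably be:

```python
# Hedged sketch: standard SDXL text-to-image loading via diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/pony-semi-realistic-v10-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "semi-realistic portrait, soft lighting",  # illustrative prompt
    num_inference_steps=28,
).images[0]
image.save("sample.png")
```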
fty7i/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala
fty7i
2025-06-23T13:06:38Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am pensive powerful koala", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T02:44:33Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am pensive powerful koala - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fty7i/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
tscstudios/iwal7zawwerd8k7vjzyubn9guup1_39744c6c-2f78-4bc0-a40a-cad55481cdef
tscstudios
2025-06-23T13:01:43Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-23T13:01:42Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # Iwal7Zawwerd8K7Vjzyubn9Guup1_39744C6C 2F78 4Bc0 A40A Cad55481Cdef <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/tscstudios/iwal7zawwerd8k7vjzyubn9guup1_39744c6c-2f78-4bc0-a40a-cad55481cdef/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('tscstudios/iwal7zawwerd8k7vjzyubn9guup1_39744c6c-2f78-4bc0-a40a-cad55481cdef', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/tscstudios/iwal7zawwerd8k7vjzyubn9guup1_39744c6c-2f78-4bc0-a40a-cad55481cdef/discussions) to add images that show off what you’ve made with this LoRA.
ooloteam/last_text_classifier
ooloteam
2025-06-23T13:00:17Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-23T12:59:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Gardolir/my_characters_flux
Gardolir
2025-06-23T12:59:51Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-11-05T19:39:14Z
--- license: apache-2.0 ---
poojastl2024/lora-whisper-large-v3-bengali-new
poojastl2024
2025-06-23T12:59:48Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-23T11:52:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Baselhany/Graduation_Project_distillation_Whisper_base222
Baselhany
2025-06-23T12:57:03Z
45
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ar", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-06-22T14:27:03Z
--- library_name: transformers language: - ar license: apache-2.0 base_model: openai/whisper-base tags: - generated_from_trainer metrics: - wer model-index: - name: Whisper base AR - BA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper base AR - BA This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the quran-ayat-speech-to-text dataset. It achieves the following results on the evaluation set: - Loss: 0.0046 - Wer: 0.0687 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.7293 | 1.0 | 1174 | 0.0073 | 0.0842 | | 3.2433 | 2.0 | 2348 | 0.0065 | 0.0817 | | 2.1238 | 3.0 | 3522 | 0.0064 | 0.0734 | | 1.6646 | 4.0 | 4696 | 0.0061 | 0.0751 | | 1.4589 | 5.0 | 5870 | 0.0060 | 0.0706 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
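The card above reports WER but no transcription snippet. A minimal ASR sketch (the audio path is a placeholder, and Arabic is assumed from the card's language tag) could be:

```python
# Hedged sketch: standard Whisper transcription via the transformers pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/Graduation_Project_distillation_Whisper_base222",
)
# "recitation.wav" is a placeholder path; language follows the card's "ar" tag.
print(asr("recitation.wav", generate_kwargs={"language": "arabic"})["text"])
```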
cesarali/ContextVAENodePK_cluster
cesarali
2025-06-23T12:52:15Z
256
0
generative-pk
[ "generative-pk", "pytorch", "node_pk", "generative", "en", "dataset:simulated", "license:apache-2.0", "region:us" ]
null
2025-06-14T12:51:22Z
--- language: - en license: apache-2.0 library_name: generative-pk datasets: - simulated metrics: - rmse - npde tags: - generative --- # Context Amortized VAE ## Overview An amortized-context VAE generative model for pharmacokinetic modelling. **Model details:** - **Authors:** César Ojeda (@cesarali) - **License:** Apache 2.0 ## Intended use Sampling drug concentration behavior.
phospho-app/Schmidie-ACT_BBOX-eyes-4c455
phospho-app
2025-06-23T12:51:52Z
0
0
null
[ "phosphobot", "act", "region:us" ]
null
2025-06-23T12:50:45Z
--- tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` The object 'Lege die Medikamenten Packung von der rechten Seite zur linken Seite' was detected in 0 episodes in main camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/Schmidie/eyes/ and rephrase the instruction. ``` ## Training parameters: - **Dataset**: [Schmidie/eyes](https://huggingface.co/datasets/Schmidie/eyes) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 100 - **Training steps**: 10000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
visolex/visobert-spam-binary
visolex
2025-06-23T12:51:38Z
0
0
null
[ "safetensors", "xlm-roberta", "spam-detection", "vietnamese", "transformer", "text-classification", "vi", "dataset:visolex/ViSpamReviews", "base_model:uitnlp/visobert", "base_model:finetune:uitnlp/visobert", "license:apache-2.0", "model-index", "region:us" ]
text-classification
2025-06-23T07:03:50Z
--- language: vi tags: - spam-detection - vietnamese - transformer license: apache-2.0 datasets: - visolex/ViSpamReviews metrics: - accuracy - f1 model-index: - name: visobert-spam-binary results: - task: type: text-classification name: Spam Detection (Binary) dataset: name: ViSpamReviews type: custom metrics: - name: Accuracy type: accuracy value: <INSERT_ACCURACY> - name: F1 Score type: f1 value: <INSERT_F1_SCORE> base_model: - uitnlp/visobert pipeline_tag: text-classification --- # ViSoBERT-Spam-Binary Fine-tuned from [`uitnlp/visobert`](https://huggingface.co/uitnlp/visobert) on **ViSpamReviews** for **binary** spam detection. * **Task**: Binary classification (`Label`: 0 = non-spam, 1 = spam) * **Dataset**: [ViSpamReviews](https://huggingface.co/datasets/visolex/ViSpamReviews) * **Hyperparameters** * Batch size: 32 * LR: 3e-5 * Epochs: 100 * Max seq len: 256 ## Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("visolex/visobert-spam-binary") model = AutoModelForSequenceClassification.from_pretrained("visolex/visobert-spam-binary") text = "Đây là đánh giá tuyệt vời!" inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256) pred = model(**inputs).logits.argmax(dim=-1).item() print("Spam" if pred==1 else "Non-spam") ```
unsloth/Mistral-Small-3.2-24B-Instruct-2506-bnb-4bit
unsloth
2025-06-23T12:51:18Z
204
2
vllm
[ "vllm", "safetensors", "mistral3", "image-text-to-text", "conversational", "en", "fr", "de", "es", "pt", "it", "ja", "ko", "ru", "zh", "ar", "fa", "id", "ms", "ne", "pl", "ro", "sr", "sv", "tr", "uk", "vi", "hi", "bn", "base_model:mistralai/Mistral-Small-3.2-24B-Instruct-2506", "base_model:quantized:mistralai/Mistral-Small-3.2-24B-Instruct-2506", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
image-text-to-text
2025-06-21T00:48:05Z
--- language: - en - fr - de - es - pt - it - ja - ko - ru - zh - ar - fa - id - ms - ne - pl - ro - sr - sv - tr - uk - vi - hi - bn license: apache-2.0 library_name: vllm inference: false base_model: - mistralai/Mistral-Small-3.2-24B-Instruct-2506 extra_gated_description: >- If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. pipeline_tag: image-text-to-text --- # Mistral-Small-3.2-24B-Instruct-2506 Mistral-Small-3.2-24B-Instruct-2506 is a minor update of [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503). Small-3.2 improves in the following categories: - **Instruction following**: Small-3.2 is better at following precise instructions - **Repetition errors**: Small-3.2 produces fewer infinite generations or repetitive answers - **Function calling**: Small-3.2's function calling template is more robust (see [here](https://github.com/mistralai/mistral-common/blob/535b4d0a0fc94674ea17db6cf8dc2079b81cbcfa/src/mistral_common/tokens/tokenizers/instruct.py#L778) and [examples](#function-calling)) In all other categories Small-3.2 should match or slightly improve on [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503). ## Key Features - same as [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503#key-features) ## Benchmark Results We compare Mistral-Small-3.2-24B to [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503). For more comparisons against other models of similar size, please check [Mistral-Small-3.1's benchmarks](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503#benchmark-results). ### Text #### Instruction Following / Chat / Tone | Model | Wildbench v2 | Arena Hard v2 | IF (Internal; accuracy) | |-------|---------------|---------------|------------------------| | Small 3.1 24B Instruct | 55.6% | 19.56% | 82.75% | | **Small 3.2 24B Instruct** | **65.33%** | **43.1%** | **84.78%** | #### Infinite Generations Small 3.2 reduces infinite generations by 2x on challenging, long and repetitive prompts. 
| Model | Infinite Generations (Internal; Lower is better) | |-------|-------| | Small 3.1 24B Instruct | 2.11% | | **Small 3.2 24B Instruct** | **1.29%** | #### STEM | Model | MMLU | MMLU Pro (5-shot CoT) | MATH | GPQA Main (5-shot CoT) | GPQA Diamond (5-shot CoT) | MBPP Plus - Pass@5 | HumanEval Plus - Pass@5 | SimpleQA (TotalAcc) | |--------------------------------|-----------|-----------------------|------------------------|------------------------|---------------------------|--------------------|-------------------------|--------------------| | Small 3.1 24B Instruct | 80.62% | 66.76% | 69.30% | 44.42% | 45.96% | 74.63% | 88.99% | 10.43% | | **Small 3.2 24B Instruct** | 80.50% | **69.06%** | 69.42% | 44.22% | 46.13% | **78.33%** | **92.90%** | **12.10%** | ### Vision | Model | MMMU | Mathvista | ChartQA | DocVQA | AI2D | |--------------------------------|------------|-----------|-----------|-----------|-----------| | Small 3.1 24B Instruct | **64.00%** | **68.91%**| 86.24% | 94.08% | 93.72% | | **Small 3.2 24B Instruct** | 62.50% | 67.09% | **87.4%** | 94.86% | 92.91% | ## Usage The model can be used with the following frameworks: - [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended) - [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers) **Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`. **Note 2**: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend using the one provided in the [SYSTEM_PROMPT.txt](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506/blob/main/SYSTEM_PROMPT.txt) file. ### vLLM (recommended) We recommend using this model with [vLLM](https://github.com/vllm-project/vllm). #### Installation Make sure to install [`vLLM >= 0.9.1`](https://github.com/vllm-project/vllm/releases/tag/v0.9.1): ``` pip install vllm --upgrade ``` Doing so should automatically install [`mistral_common >= 1.6.2`](https://github.com/mistralai/mistral-common/releases/tag/v1.6.2). To check: ``` python -c "import mistral_common; print(mistral_common.__version__)" ``` You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from [Docker Hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39). #### Serve We recommend that you use Mistral-Small-3.2-24B-Instruct-2506 in a server/client setting. 1. Spin up a server: ``` vllm serve mistralai/Mistral-Small-3.2-24B-Instruct-2506 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --limit_mm_per_prompt 'image=10' --tensor-parallel-size 2 ``` **Note:** Running Mistral-Small-3.2-24B-Instruct-2506 on GPU requires ~55 GB of GPU RAM in bf16 or fp16. 2. To query the server you can use a simple Python snippet. See the following examples. #### Vision reasoning Leverage the vision capabilities of Mistral-Small-3.2-24B-Instruct-2506 to make the best choice in a given scenario. Go catch them all! <details> <summary>Python snippet</summary> ```py from datetime import datetime, timedelta from openai import OpenAI from huggingface_hub import hf_hub_download # Modify OpenAI's API key and API base to use vLLM's API server. 
openai_api_key = "EMPTY" openai_api_base = "http://localhost:8000/v1" TEMP = 0.15 MAX_TOK = 131072 client = OpenAI( api_key=openai_api_key, base_url=openai_api_base, ) models = client.models.list() model = models.data[0].id def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() today = datetime.today().strftime("%Y-%m-%d") yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d") model_name = repo_id.split("/")[-1] return system_prompt.format(name=model_name, today=today, yesterday=yesterday) model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506" SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt") image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438" messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": [ { "type": "text", "text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.", }, {"type": "image_url", "image_url": {"url": image_url}}, ], }, ] response = client.chat.completions.create( model=model, messages=messages, temperature=TEMP, max_tokens=MAX_TOK, ) print(response.choices[0].message.content) # In this situation, you are playing a Pokémon game where your Pikachu (Level 42) is facing a wild Pidgey (Level 17). Here are the possible actions you can take and an analysis of each: # 1. **FIGHT**: # - **Pros**: Pikachu is significantly higher level than the wild Pidgey, which suggests that it should be able to defeat Pidgey easily. This could be a good opportunity to gain experience points and possibly items or money. # - **Cons**: There is always a small risk of Pikachu fainting, especially if Pidgey has a powerful move or a status effect that could hinder Pikachu. However, given the large level difference, this risk is minimal. # 2. **BAG**: # - **Pros**: You might have items in your bag that could help in this battle, such as Potions, Poké Balls, or Berries. Using an item could help you capture the Pidgey or heal your Pikachu if needed. # - **Cons**: Using items might not be necessary given the level difference. It could be more efficient to just fight and defeat the Pidgey quickly. # 3. **POKÉMON**: # - **Pros**: You might have another Pokémon in your party that is better suited for this battle or that you want to gain experience. Switching Pokémon could also be a strategic move if you want to train a lower-level Pokémon. # - **Cons**: Switching Pokémon might not be necessary since Pikachu is at a significant advantage. It could also waste time and potentially give Pidgey a turn to attack. # 4. **RUN**: # - **Pros**: Running away could save time and conserve your Pokémon's health and resources. If you are in a hurry or do not need the experience or items, running away is a safe option. # - **Cons**: Running away means you miss out on the experience points and potential items or money that you could gain from defeating the Pidgey. It also means you do not get the chance to capture the Pidgey if you wanted to. # ### Recommendation: # Given the significant level advantage, the best action is likely to **FIGHT**. This will allow you to quickly defeat the Pidgey, gain experience points, and potentially earn items or money. 
If you are concerned about Pikachu's health, you could use an item from your **BAG** to heal it before or during the battle. Running away or switching Pokémon does not seem necessary in this situation. ``` </details> #### Function calling Mistral-Small-3.2-24B-Instruct-2506 is excellent at function / tool calling tasks via vLLM. *E.g.:* <details> <summary>Python snippet - easy</summary> ```py from openai import OpenAI from huggingface_hub import hf_hub_download # Modify OpenAI's API key and API base to use vLLM's API server. openai_api_key = "EMPTY" openai_api_base = "http://localhost:8000/v1" TEMP = 0.15 MAX_TOK = 131072 client = OpenAI( api_key=openai_api_key, base_url=openai_api_base, ) models = client.models.list() model = models.data[0].id def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() return system_prompt model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506" SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt") image_url = "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/europe.png" tools = [ { "type": "function", "function": { "name": "get_current_population", "description": "Get the up-to-date population of a given country.", "parameters": { "type": "object", "properties": { "country": { "type": "string", "description": "The country to find the population of.", }, "unit": { "type": "string", "description": "The unit for the population.", "enum": ["millions", "thousands"], }, }, "required": ["country", "unit"], }, }, }, { "type": "function", "function": { "name": "rewrite", "description": "Rewrite a given text for improved clarity", "parameters": { "type": "object", "properties": { "text": { "type": "string", "description": "The input text to rewrite", } }, }, }, }, ] messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.", }, { "role": "assistant", "content": "", "tool_calls": [ { "id": "bbc5b7ede", "type": "function", "function": { "name": "rewrite", "arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}', }, } ], }, { "role": "tool", "content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}', "tool_call_id": "bbc5b7ede", "name": "rewrite", }, { "role": "assistant", "content": "---\n\nOpenAI is a FOR-profit company.", }, { "role": "user", "content": [ { "type": "text", "text": "Can you tell me what is the biggest country depicted on the map?", }, { "type": "image_url", "image_url": { "url": image_url, }, }, ], } ] response = client.chat.completions.create( model=model, messages=messages, temperature=TEMP, max_tokens=MAX_TOK, tools=tools, tool_choice="auto", ) assistant_message = response.choices[0].message.content print(assistant_message) # The biggest country depicted on the map is Russia. 
messages.extend([ {"role": "assistant", "content": assistant_message}, {"role": "user", "content": "What is the population of that country in millions?"}, ]) response = client.chat.completions.create( model=model, messages=messages, temperature=TEMP, max_tokens=MAX_TOK, tools=tools, tool_choice="auto", ) print(response.choices[0].message.tool_calls) # [ChatCompletionMessageToolCall(id='3e92V6Vfo', function=Function(arguments='{"country": "Russia", "unit": "millions"}', name='get_current_population'), type='function')] ``` </details> <details> <summary>Python snippet - complex</summary> ```python import json from openai import OpenAI from huggingface_hub import hf_hub_download # Modify OpenAI's API key and API base to use vLLM's API server. openai_api_key = "EMPTY" openai_api_base = "http://localhost:8000/v1" TEMP = 0.15 MAX_TOK = 131072 client = OpenAI( api_key=openai_api_key, base_url=openai_api_base, ) models = client.models.list() model = models.data[0].id def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() return system_prompt model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506" SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt") image_url = "https://math-coaching.com/img/fiche/46/expressions-mathematiques.jpg" def my_calculator(expression: str) -> str: return str(eval(expression)) tools = [ { "type": "function", "function": { "name": "my_calculator", "description": "A calculator that can evaluate a mathematical expression.", "parameters": { "type": "object", "properties": { "expression": { "type": "string", "description": "The mathematical expression to evaluate.", }, }, "required": ["expression"], }, }, }, { "type": "function", "function": { "name": "rewrite", "description": "Rewrite a given text for improved clarity", "parameters": { "type": "object", "properties": { "text": { "type": "string", "description": "The input text to rewrite", } }, }, }, }, ] messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": [ { "type": "text", "text": "Can you calculate the results for all the equations displayed in the image? 
Only compute the ones that involve numbers.", }, { "type": "image_url", "image_url": { "url": image_url, }, }, ], }, ] response = client.chat.completions.create( model=model, messages=messages, temperature=TEMP, max_tokens=MAX_TOK, tools=tools, tool_choice="auto", ) tool_calls = response.choices[0].message.tool_calls print(tool_calls) # [ChatCompletionMessageToolCall(id='CyQBSAtGh', function=Function(arguments='{"expression": "6 + 2 * 3"}', name='my_calculator'), type='function'), ChatCompletionMessageToolCall(id='KQqRCqvzc', function=Function(arguments='{"expression": "19 - (8 + 2) + 1"}', name='my_calculator'), type='function')] results = [] for tool_call in tool_calls: function_name = tool_call.function.name function_args = tool_call.function.arguments if function_name == "my_calculator": result = my_calculator(**json.loads(function_args)) results.append(result) messages.append({"role": "assistant", "tool_calls": tool_calls}) for tool_call, result in zip(tool_calls, results): messages.append( { "role": "tool", "tool_call_id": tool_call.id, "name": tool_call.function.name, "content": result, } ) response = client.chat.completions.create( model=model, messages=messages, temperature=TEMP, max_tokens=MAX_TOK, ) print(response.choices[0].message.content) # Here are the results for the equations that involve numbers: # 1. \( 6 + 2 \times 3 = 12 \) # 3. \( 19 - (8 + 2) + 1 = 10 \) # For the other equations, you need to substitute the variables with specific values to compute the results. ``` </details> #### Instruction following Mistral-Small-3.2-24B-Instruct-2506 will follow your instructions down to the last letter ! <details> <summary>Python snippet</summary> ```python from openai import OpenAI from huggingface_hub import hf_hub_download # Modify OpenAI's API key and API base to use vLLM's API server. openai_api_key = "EMPTY" openai_api_base = "http://localhost:8000/v1" TEMP = 0.15 MAX_TOK = 131072 client = OpenAI( api_key=openai_api_key, base_url=openai_api_base, ) models = client.models.list() model = models.data[0].id def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() return system_prompt model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506" SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt") messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": "Write me a sentence where every word starts with the next letter in the alphabet - start with 'a' and end with 'z'.", }, ] response = client.chat.completions.create( model=model, messages=messages, temperature=TEMP, max_tokens=MAX_TOK, ) assistant_message = response.choices[0].message.content print(assistant_message) # Here's a sentence where each word starts with the next letter of the alphabet, starting from 'a' and ending with 'z': # "Always brave cats dance elegantly, fluffy giraffes happily ignore jungle kites, lovingly munching nuts, observing playful quails racing swiftly, tiny unicorns vaulting while xylophones yodel zealously." # This sentence follows the sequence from A to Z without skipping any letters. ``` </details> ### Transformers You can also use Mistral-Small-3.2-24B-Instruct-2506 with `Transformers` ! To make the best use of our model with `Transformers` make sure to have [installed](https://github.com/mistralai/mistral-common) `mistral-common >= 1.6.2` to use our tokenizer. 
```bash pip install mistral-common --upgrade ``` Then load our tokenizer along with the model and generate: <details> <summary>Python snippet</summary> ```python from datetime import datetime, timedelta import torch from mistral_common.protocol.instruct.request import ChatCompletionRequest from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from huggingface_hub import hf_hub_download from transformers import Mistral3ForConditionalGeneration def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() today = datetime.today().strftime("%Y-%m-%d") yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d") model_name = repo_id.split("/")[-1] return system_prompt.format(name=model_name, today=today, yesterday=yesterday) model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506" SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt") tokenizer = MistralTokenizer.from_hf_hub(model_id) model = Mistral3ForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.bfloat16 ) image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438" messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": [ { "type": "text", "text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.", }, {"type": "image_url", "image_url": {"url": image_url}}, ], }, ] tokenized = tokenizer.encode_chat_completion(ChatCompletionRequest(messages=messages)) input_ids = torch.tensor([tokenized.tokens]) attention_mask = torch.ones_like(input_ids) pixel_values = torch.tensor(tokenized.images[0], dtype=torch.bfloat16).unsqueeze(0) image_sizes = torch.tensor([pixel_values.shape[-2:]]) output = model.generate( input_ids=input_ids, attention_mask=attention_mask, pixel_values=pixel_values, image_sizes=image_sizes, max_new_tokens=1000, )[0] decoded_output = tokenizer.decode(output[len(tokenized.tokens) :]) print(decoded_output) # In this situation, you are playing a Pokémon game where your Pikachu (Level 42) is facing a wild Pidgey (Level 17). Here are the possible actions you can take and an analysis of each: # 1. **FIGHT**: # - **Pros**: Pikachu is significantly higher level than the wild Pidgey, which suggests that it should be able to defeat Pidgey easily. This could be a good opportunity to gain experience points and possibly items or money. # - **Cons**: There is always a small risk of Pikachu fainting, especially if Pidgey has a powerful move or a status effect that could hinder Pikachu. However, given the large level difference, this risk is minimal. # 2. **BAG**: # - **Pros**: You might have items in your bag that could help in this battle, such as Potions, Poké Balls, or Berries. Using an item could help you capture Pidgey or heal Pikachu if needed. # - **Cons**: Using items might not be necessary given the level difference. It could be more efficient to just fight and defeat Pidgey quickly. # 3. **POKÉMON**: # - **Pros**: You might have another Pokémon in your party that is better suited for this battle or that you want to gain experience. Switching Pokémon could also be strategic if you want to train a lower-level Pokémon. # - **Cons**: Switching Pokémon might not be necessary since Pikachu is at a significant advantage. 
It could also waste time and potentially give Pidgey a turn to attack. # 4. **RUN**: # - **Pros**: Running away could be a quick way to avoid the battle altogether. This might be useful if you are trying to conserve resources or if you are in a hurry to get to another location. # - **Cons**: Running away means you miss out on the experience points, items, or money that you could gain from defeating Pidgey. It also might not be the most efficient use of your time if you are trying to train your Pokémon. # ### Recommendation: # Given the significant level advantage, the best action to take is likely **FIGHT**. This will allow you to quickly defeat Pidgey and gain experience points for Pikachu. If you are concerned about Pikachu's health, you could use the **BAG** to heal Pikachu before or during the battle. Running away or switching Pokémon does not seem necessary in this situation. ``` </details>
yolooooooooo/Qwen-3-32B-Medical-Reasoning
yolooooooooo
2025-06-23T12:50:57Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-23T12:49:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
unsloth/Mistral-Small-3.2-24B-Instruct-2506-unsloth-bnb-4bit
unsloth
2025-06-23T12:49:30Z
242
0
vllm
[ "vllm", "safetensors", "mistral3", "image-text-to-text", "conversational", "en", "fr", "de", "es", "pt", "it", "ja", "ko", "ru", "zh", "ar", "fa", "id", "ms", "ne", "pl", "ro", "sr", "sv", "tr", "uk", "vi", "hi", "bn", "base_model:mistralai/Mistral-Small-3.2-24B-Instruct-2506", "base_model:quantized:mistralai/Mistral-Small-3.2-24B-Instruct-2506", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
image-text-to-text
2025-06-21T00:43:57Z
---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
inference: false
base_model:
- mistralai/Mistral-Small-3.2-24B-Instruct-2506
pipeline_tag: image-text-to-text
---

# Mistral-Small-3.2-24B-Instruct-2506

Mistral-Small-3.2-24B-Instruct-2506 is a minor update of [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503).

Small-3.2 improves in the following categories:
- **Instruction following**: Small-3.2 is better at following precise instructions
- **Repetition errors**: Small-3.2 produces fewer infinite generations and repetitive answers
- **Function calling**: Small-3.2's function calling template is more robust (see [here](https://github.com/mistralai/mistral-common/blob/535b4d0a0fc94674ea17db6cf8dc2079b81cbcfa/src/mistral_common/tokens/tokenizers/instruct.py#L778) and [examples](#function-calling))

In all other categories, Small-3.2 should match or slightly improve on [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503).

## Key Features
- same as [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503#key-features)

## Benchmark Results

We compare Mistral-Small-3.2-24B to [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503). For more comparisons against other models of similar size, please check [Mistral-Small-3.1's benchmarks](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503#benchmark-results).

### Text

#### Instruction Following / Chat / Tone

| Model | Wildbench v2 | Arena Hard v2 | IF (Internal; accuracy) |
|-------|---------------|---------------|------------------------|
| Small 3.1 24B Instruct | 55.6% | 19.56% | 82.75% |
| **Small 3.2 24B Instruct** | **65.33%** | **43.1%** | **84.78%** |

#### Infinite Generations

Small 3.2 reduces infinite generations by 2x on challenging, long and repetitive prompts.

| Model | Infinite Generations (Internal; Lower is better) |
|-------|-------|
| Small 3.1 24B Instruct | 2.11% |
| **Small 3.2 24B Instruct** | **1.29%** |

#### STEM

| Model | MMLU | MMLU Pro (5-shot CoT) | MATH | GPQA Main (5-shot CoT) | GPQA Diamond (5-shot CoT) | MBPP Plus - Pass@5 | HumanEval Plus - Pass@5 | SimpleQA (TotalAcc) |
|--------------------------------|-----------|-----------------------|------|------------------------|---------------------------|--------------------|-------------------------|--------------------|
| Small 3.1 24B Instruct | 80.62% | 66.76% | 69.30% | 44.42% | 45.96% | 74.63% | 88.99% | 10.43% |
| **Small 3.2 24B Instruct** | 80.50% | **69.06%** | 69.42% | 44.22% | 46.13% | **78.33%** | **92.90%** | **12.10%** |

### Vision

| Model | MMMU | Mathvista | ChartQA | DocVQA | AI2D |
|--------------------------------|------------|-----------|-----------|-----------|-----------|
| Small 3.1 24B Instruct | **64.00%** | **68.91%** | 86.24% | 94.08% | 93.72% |
| **Small 3.2 24B Instruct** | 62.50% | 67.09% | **87.4%** | 94.86% | 92.91% |

## Usage

The model can be used with the following frameworks:
- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)

**Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`.
**Note 2**: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend using the one provided in the [SYSTEM_PROMPT.txt](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506/blob/main/SYSTEM_PROMPT.txt) file.

### vLLM (recommended)

We recommend using this model with [vLLM](https://github.com/vllm-project/vllm).

#### Installation

Make sure to install [`vLLM >= 0.9.1`](https://github.com/vllm-project/vllm/releases/tag/v0.9.1):

```
pip install vllm --upgrade
```

Doing so should automatically install [`mistral_common >= 1.6.2`](https://github.com/mistralai/mistral-common/releases/tag/v1.6.2). To check:

```
python -c "import mistral_common; print(mistral_common.__version__)"
```

You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from [Docker Hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).

#### Serve

We recommend using Mistral-Small-3.2-24B-Instruct-2506 in a server/client setting.

1. Spin up a server:

```
vllm serve mistralai/Mistral-Small-3.2-24B-Instruct-2506 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --limit_mm_per_prompt 'image=10' --tensor-parallel-size 2
```

**Note:** Running Mistral-Small-3.2-24B-Instruct-2506 on GPU requires ~55 GB of GPU RAM in bf16 or fp16.

2. To query the server, you can use a simple Python snippet. See the following examples.

#### Vision reasoning

Leverage the vision capabilities of Mistral-Small-3.2-24B-Instruct-2506 to make the best choice in a given scenario. Gotta catch 'em all!

<details>
<summary>Python snippet</summary>

```py
from datetime import datetime, timedelta

from openai import OpenAI
from huggingface_hub import hf_hub_download

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

TEMP = 0.15
MAX_TOK = 131072

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id


def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    today = datetime.today().strftime("%Y-%m-%d")
    yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
    model_name = repo_id.split("/")[-1]
    return system_prompt.format(name=model_name, today=today, yesterday=yesterday)


model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")

image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
            },
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    },
]

response = client.chat.completions.create(
    model=model,
    messages=messages,
    temperature=TEMP,
    max_tokens=MAX_TOK,
)

print(response.choices[0].message.content)
# In this situation, you are playing a Pokémon game where your Pikachu (Level 42) is facing a wild Pidgey (Level 17).
Here are the possible actions you can take and an analysis of each: # 1. **FIGHT**: # - **Pros**: Pikachu is significantly higher level than the wild Pidgey, which suggests that it should be able to defeat Pidgey easily. This could be a good opportunity to gain experience points and possibly items or money. # - **Cons**: There is always a small risk of Pikachu fainting, especially if Pidgey has a powerful move or a status effect that could hinder Pikachu. However, given the large level difference, this risk is minimal. # 2. **BAG**: # - **Pros**: You might have items in your bag that could help in this battle, such as Potions, Poké Balls, or Berries. Using an item could help you capture the Pidgey or heal your Pikachu if needed. # - **Cons**: Using items might not be necessary given the level difference. It could be more efficient to just fight and defeat the Pidgey quickly. # 3. **POKÉMON**: # - **Pros**: You might have another Pokémon in your party that is better suited for this battle or that you want to gain experience. Switching Pokémon could also be a strategic move if you want to train a lower-level Pokémon. # - **Cons**: Switching Pokémon might not be necessary since Pikachu is at a significant advantage. It could also waste time and potentially give Pidgey a turn to attack. # 4. **RUN**: # - **Pros**: Running away could save time and conserve your Pokémon's health and resources. If you are in a hurry or do not need the experience or items, running away is a safe option. # - **Cons**: Running away means you miss out on the experience points and potential items or money that you could gain from defeating the Pidgey. It also means you do not get the chance to capture the Pidgey if you wanted to. # ### Recommendation: # Given the significant level advantage, the best action is likely to **FIGHT**. This will allow you to quickly defeat the Pidgey, gain experience points, and potentially earn items or money. If you are concerned about Pikachu's health, you could use an item from your **BAG** to heal it before or during the battle. Running away or switching Pokémon does not seem necessary in this situation. ``` </details> #### Function calling Mistral-Small-3.2-24B-Instruct-2506 is excellent at function / tool calling tasks via vLLM. *E.g.:* <details> <summary>Python snippet - easy</summary> ```py from openai import OpenAI from huggingface_hub import hf_hub_download # Modify OpenAI's API key and API base to use vLLM's API server. 
openai_api_key = "EMPTY" openai_api_base = "http://localhost:8000/v1" TEMP = 0.15 MAX_TOK = 131072 client = OpenAI( api_key=openai_api_key, base_url=openai_api_base, ) models = client.models.list() model = models.data[0].id def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() return system_prompt model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506" SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt") image_url = "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/europe.png" tools = [ { "type": "function", "function": { "name": "get_current_population", "description": "Get the up-to-date population of a given country.", "parameters": { "type": "object", "properties": { "country": { "type": "string", "description": "The country to find the population of.", }, "unit": { "type": "string", "description": "The unit for the population.", "enum": ["millions", "thousands"], }, }, "required": ["country", "unit"], }, }, }, { "type": "function", "function": { "name": "rewrite", "description": "Rewrite a given text for improved clarity", "parameters": { "type": "object", "properties": { "text": { "type": "string", "description": "The input text to rewrite", } }, }, }, }, ] messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.", }, { "role": "assistant", "content": "", "tool_calls": [ { "id": "bbc5b7ede", "type": "function", "function": { "name": "rewrite", "arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}', }, } ], }, { "role": "tool", "content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}', "tool_call_id": "bbc5b7ede", "name": "rewrite", }, { "role": "assistant", "content": "---\n\nOpenAI is a FOR-profit company.", }, { "role": "user", "content": [ { "type": "text", "text": "Can you tell me what is the biggest country depicted on the map?", }, { "type": "image_url", "image_url": { "url": image_url, }, }, ], } ] response = client.chat.completions.create( model=model, messages=messages, temperature=TEMP, max_tokens=MAX_TOK, tools=tools, tool_choice="auto", ) assistant_message = response.choices[0].message.content print(assistant_message) # The biggest country depicted on the map is Russia. messages.extend([ {"role": "assistant", "content": assistant_message}, {"role": "user", "content": "What is the population of that country in millions?"}, ]) response = client.chat.completions.create( model=model, messages=messages, temperature=TEMP, max_tokens=MAX_TOK, tools=tools, tool_choice="auto", ) print(response.choices[0].message.tool_calls) # [ChatCompletionMessageToolCall(id='3e92V6Vfo', function=Function(arguments='{"country": "Russia", "unit": "millions"}', name='get_current_population'), type='function')] ``` </details> <details> <summary>Python snippet - complex</summary> ```python import json from openai import OpenAI from huggingface_hub import hf_hub_download # Modify OpenAI's API key and API base to use vLLM's API server. 
openai_api_key = "EMPTY" openai_api_base = "http://localhost:8000/v1" TEMP = 0.15 MAX_TOK = 131072 client = OpenAI( api_key=openai_api_key, base_url=openai_api_base, ) models = client.models.list() model = models.data[0].id def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() return system_prompt model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506" SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt") image_url = "https://math-coaching.com/img/fiche/46/expressions-mathematiques.jpg" def my_calculator(expression: str) -> str: return str(eval(expression)) tools = [ { "type": "function", "function": { "name": "my_calculator", "description": "A calculator that can evaluate a mathematical expression.", "parameters": { "type": "object", "properties": { "expression": { "type": "string", "description": "The mathematical expression to evaluate.", }, }, "required": ["expression"], }, }, }, { "type": "function", "function": { "name": "rewrite", "description": "Rewrite a given text for improved clarity", "parameters": { "type": "object", "properties": { "text": { "type": "string", "description": "The input text to rewrite", } }, }, }, }, ] messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": [ { "type": "text", "text": "Can you calculate the results for all the equations displayed in the image? Only compute the ones that involve numbers.", }, { "type": "image_url", "image_url": { "url": image_url, }, }, ], }, ] response = client.chat.completions.create( model=model, messages=messages, temperature=TEMP, max_tokens=MAX_TOK, tools=tools, tool_choice="auto", ) tool_calls = response.choices[0].message.tool_calls print(tool_calls) # [ChatCompletionMessageToolCall(id='CyQBSAtGh', function=Function(arguments='{"expression": "6 + 2 * 3"}', name='my_calculator'), type='function'), ChatCompletionMessageToolCall(id='KQqRCqvzc', function=Function(arguments='{"expression": "19 - (8 + 2) + 1"}', name='my_calculator'), type='function')] results = [] for tool_call in tool_calls: function_name = tool_call.function.name function_args = tool_call.function.arguments if function_name == "my_calculator": result = my_calculator(**json.loads(function_args)) results.append(result) messages.append({"role": "assistant", "tool_calls": tool_calls}) for tool_call, result in zip(tool_calls, results): messages.append( { "role": "tool", "tool_call_id": tool_call.id, "name": tool_call.function.name, "content": result, } ) response = client.chat.completions.create( model=model, messages=messages, temperature=TEMP, max_tokens=MAX_TOK, ) print(response.choices[0].message.content) # Here are the results for the equations that involve numbers: # 1. \( 6 + 2 \times 3 = 12 \) # 3. \( 19 - (8 + 2) + 1 = 10 \) # For the other equations, you need to substitute the variables with specific values to compute the results. ``` </details> #### Instruction following Mistral-Small-3.2-24B-Instruct-2506 will follow your instructions down to the last letter ! <details> <summary>Python snippet</summary> ```python from openai import OpenAI from huggingface_hub import hf_hub_download # Modify OpenAI's API key and API base to use vLLM's API server. 
openai_api_key = "EMPTY" openai_api_base = "http://localhost:8000/v1" TEMP = 0.15 MAX_TOK = 131072 client = OpenAI( api_key=openai_api_key, base_url=openai_api_base, ) models = client.models.list() model = models.data[0].id def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() return system_prompt model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506" SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt") messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": "Write me a sentence where every word starts with the next letter in the alphabet - start with 'a' and end with 'z'.", }, ] response = client.chat.completions.create( model=model, messages=messages, temperature=TEMP, max_tokens=MAX_TOK, ) assistant_message = response.choices[0].message.content print(assistant_message) # Here's a sentence where each word starts with the next letter of the alphabet, starting from 'a' and ending with 'z': # "Always brave cats dance elegantly, fluffy giraffes happily ignore jungle kites, lovingly munching nuts, observing playful quails racing swiftly, tiny unicorns vaulting while xylophones yodel zealously." # This sentence follows the sequence from A to Z without skipping any letters. ``` </details> ### Transformers You can also use Mistral-Small-3.2-24B-Instruct-2506 with `Transformers` ! To make the best use of our model with `Transformers` make sure to have [installed](https://github.com/mistralai/mistral-common) `mistral-common >= 1.6.2` to use our tokenizer. ```bash pip install mistral-common --upgrade ``` Then load our tokenizer along with the model and generate: <details> <summary>Python snippet</summary> ```python from datetime import datetime, timedelta import torch from mistral_common.protocol.instruct.request import ChatCompletionRequest from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from huggingface_hub import hf_hub_download from transformers import Mistral3ForConditionalGeneration def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() today = datetime.today().strftime("%Y-%m-%d") yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d") model_name = repo_id.split("/")[-1] return system_prompt.format(name=model_name, today=today, yesterday=yesterday) model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506" SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt") tokenizer = MistralTokenizer.from_hf_hub(model_id) model = Mistral3ForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.bfloat16 ) image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438" messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": [ { "type": "text", "text": "What action do you think I should take in this situation? 
List all the possible actions and explain why you think they are good or bad.", }, {"type": "image_url", "image_url": {"url": image_url}}, ], }, ] tokenized = tokenizer.encode_chat_completion(ChatCompletionRequest(messages=messages)) input_ids = torch.tensor([tokenized.tokens]) attention_mask = torch.ones_like(input_ids) pixel_values = torch.tensor(tokenized.images[0], dtype=torch.bfloat16).unsqueeze(0) image_sizes = torch.tensor([pixel_values.shape[-2:]]) output = model.generate( input_ids=input_ids, attention_mask=attention_mask, pixel_values=pixel_values, image_sizes=image_sizes, max_new_tokens=1000, )[0] decoded_output = tokenizer.decode(output[len(tokenized.tokens) :]) print(decoded_output) # In this situation, you are playing a Pokémon game where your Pikachu (Level 42) is facing a wild Pidgey (Level 17). Here are the possible actions you can take and an analysis of each: # 1. **FIGHT**: # - **Pros**: Pikachu is significantly higher level than the wild Pidgey, which suggests that it should be able to defeat Pidgey easily. This could be a good opportunity to gain experience points and possibly items or money. # - **Cons**: There is always a small risk of Pikachu fainting, especially if Pidgey has a powerful move or a status effect that could hinder Pikachu. However, given the large level difference, this risk is minimal. # 2. **BAG**: # - **Pros**: You might have items in your bag that could help in this battle, such as Potions, Poké Balls, or Berries. Using an item could help you capture Pidgey or heal Pikachu if needed. # - **Cons**: Using items might not be necessary given the level difference. It could be more efficient to just fight and defeat Pidgey quickly. # 3. **POKÉMON**: # - **Pros**: You might have another Pokémon in your party that is better suited for this battle or that you want to gain experience. Switching Pokémon could also be strategic if you want to train a lower-level Pokémon. # - **Cons**: Switching Pokémon might not be necessary since Pikachu is at a significant advantage. It could also waste time and potentially give Pidgey a turn to attack. # 4. **RUN**: # - **Pros**: Running away could be a quick way to avoid the battle altogether. This might be useful if you are trying to conserve resources or if you are in a hurry to get to another location. # - **Cons**: Running away means you miss out on the experience points, items, or money that you could gain from defeating Pidgey. It also might not be the most efficient use of your time if you are trying to train your Pokémon. # ### Recommendation: # Given the significant level advantage, the best action to take is likely **FIGHT**. This will allow you to quickly defeat Pidgey and gain experience points for Pikachu. If you are concerned about Pikachu's health, you could use the **BAG** to heal Pikachu before or during the battle. Running away or switching Pokémon does not seem necessary in this situation. ``` </details>
NikolaiRaitschew/anymizer3-detector-full-finetuned-final-gguf
NikolaiRaitschew
2025-06-23T12:49:04Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2025-06-23T12:48:07Z
# Anymizer Detector - Full Fine-tuned Model (GGUF) Fine-tuned Mistral-7B model for detecting and anonymizing sensitive data in legal documents. ## Available Formats - `anymizer-model-f16.gguf` - Full precision (2.0GB) - `anymizer-model-q4_k_m.gguf` - 4-bit quantized, recommended (~1.2GB) - `anymizer-model-q8_0.gguf` - 8-bit quantized, high quality (~1.8GB) ## Usage with llama.cpp ```bash # Download model wget https://huggingface.co/NikolaiRaitschew/anymizer-detector-full-finetuned-final-gguf/resolve/main/anymizer-model-q4_k_m.gguf # Run inference ./llama-cli -m anymizer-model-q4_k_m.gguf -p "Your prompt here" ``` ## Training Details - **Base model**: Mistral-7B-Instruct-v0.3 - **Training type**: Full fine-tuning (all parameters trained) - **Dataset**: Custom legal document anonymization - **Final training loss**: 0.5417 - **Training epochs**: 2.0 ## Model Performance This model has been fully fine-tuned for legal document anonymization tasks with excellent performance.
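## Usage with llama-cpp-python (sketch)

The same GGUF files can also be loaded from Python via the `llama-cpp-python` bindings. A minimal sketch, assuming `llama-cpp-python` is installed and the q4_k_m file has been downloaded as above; the prompt format below is an illustrative assumption, not the exact format used in training:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load the 4-bit quantized GGUF (path assumes the wget download above)
llm = Llama(
    model_path="anymizer-model-q4_k_m.gguf",
    n_ctx=4096,    # context window; adjust to your document length
    n_threads=8,   # CPU threads
)

# Hypothetical anonymization prompt; adapt to the model's training format
prompt = (
    "Anonymize all personal data in the following text:\n"
    "John Doe, born 01.02.1980, resides at 12 Main Street."
)
out = llm(prompt, max_tokens=256, temperature=0.2)
print(out["choices"][0]["text"])
```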
John6666/nexora-spectrum-of-limitless-style-prism-sdxl
John6666
2025-06-23T12:48:40Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "hentai", "kemomimi", "girls", "artist", "fantasy", "portraits", "artwork", "merge", "noobai", "illustrious", "en", "base_model:Laxhar/noobai-XL-1.1", "base_model:merge:Laxhar/noobai-XL-1.1", "base_model:OnomaAIResearch/Illustrious-XL-v1.0", "base_model:merge:OnomaAIResearch/Illustrious-XL-v1.0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-06-23T12:42:38Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - hentai - kemomimi - girls - artist - fantasy - portraits - artwork - merge - noobai - illustrious base_model: - OnomaAIResearch/Illustrious-XL-v1.0 - Laxhar/noobai-XL-1.1 --- Original model is [here](https://civitai.com/models/1661037/nexora-spectrum-of-limitless-style?modelVersionId=1880066). This model created by [ArvoraVisio](https://civitai.com/user/ArvoraVisio).
stablediffusionapi/animeillustdiffusion2
stablediffusionapi
2025-06-23T12:47:48Z
0
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-23T12:44:29Z
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
pipeline_tag: text-to-image
library_name: diffusers
widget:
- text: a girl wandering through the forest
  output:
    url: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/40b00351-c931-4494-a70e-5490a6196d81/width=768/10814020.jpeg
---

# Anime Illust Diffusion 2 API Inference

<Gallery />

## Get API Key

Get an API key from [ModelsLab API](http://modelslab.com); no payment needed. Replace the key in the code below and change **model_id** to "animeillustdiffusion2".

Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)

Try model for free: [Generate Images](https://modelslab.com/models/animeillustdiffusion2)

Model link: [View model](https://modelslab.com/models/animeillustdiffusion2)

View all models: [View Models](https://modelslab.com/models)

```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "animeillustdiffusion2",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "",
    "lora": "",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
Bandolik/autogen-mistral-4bit-lora-adapterv03
Bandolik
2025-06-23T12:47:32Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-23T12:46:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PuffDaddy/een
PuffDaddy
2025-06-23T12:47:11Z
0
0
null
[ "license:other", "region:us" ]
null
2025-06-23T12:45:52Z
--- license: other license_name: flux-1-dev-non-commercial license_link: https://weights.gg/license/flux ---
PuffDaddy/dewy
PuffDaddy
2025-06-23T12:43:10Z
0
0
null
[ "license:other", "region:us" ]
null
2025-06-23T12:42:03Z
--- license: other license_name: flux-1-dev-non-commercial license_link: https://weights.gg/license/flux ---
yu3733/paligemma2-3b-lora-vqa-v21-enhanced-d1000-r24
yu3733
2025-06-23T12:42:48Z
0
0
peft
[ "peft", "safetensors", "paligemma", "lora", "adapter", "visual-question-answering", "image-to-text", "v2.1-enhanced", "en", "base_model:google/paligemma2-3b-mix-224", "base_model:adapter:google/paligemma2-3b-mix-224", "region:us" ]
image-to-text
2025-06-23T12:42:32Z
--- tags: - paligemma - lora - adapter - visual-question-answering - image-to-text - v2.1-enhanced base_model: google/paligemma2-3b-mix-224 language: - en library_name: peft --- # paligemma2-3b-lora-vqa-v21-enhanced-d1000-r24 - v2.1 Enhanced This is a **v2.1 Enhanced** LoRA adapter for PaliGemma-2 3B trained on VQA tasks. ## 🆕 v2.1 Enhanced Improvements - **EOS Token Learning**: Explicit EOS tokens for better generation termination - **Memory Optimization**: 16-step gradient accumulation for stability - **VizWiz Format Support**: Full support with most frequent answer selection - **Robust Label Masking**: Enhanced prompt masking during training - **Production Memory Management**: Advanced garbage collection ## Usage ```python from transformers import AutoProcessor, PaliGemmaForConditionalGeneration from peft import PeftModel import torch from PIL import Image # Base model base_model_id = "google/paligemma2-3b-mix-224" adapter_id = "yu3733/paligemma2-3b-lora-vqa-v21-enhanced-d1000-r24" # Load processor processor = AutoProcessor.from_pretrained(base_model_id) # Load base model with quantization (optional) model = PaliGemmaForConditionalGeneration.from_pretrained( base_model_id, torch_dtype=torch.float16, device_map="auto" ) # Load LoRA adapter model = PeftModel.from_pretrained(model, adapter_id) # Prepare input image = Image.open("your_image.jpg") prompt = "<image>\nQuestion: What is in this image?\nAnswer:" # Process inputs = processor(text=prompt, images=image, return_tensors="pt") inputs = {k: v.to(model.device) for k, v in inputs.items()} # Generate with torch.no_grad(): outputs = model.generate(**inputs, max_new_tokens=20) # Decode print(processor.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Configuration - **Base Model**: google/paligemma2-3b-mix-224 - **LoRA Rank**: 24 - **Training Framework**: PEFT + Transformers - **Optimization**: 4-bit quantization + gradient checkpointing - **Dataset**: VizWiz VQA ## License Same as the base model (see google/paligemma2-3b-mix-224)
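## 4-bit Loading (Sketch)

Since training used 4-bit quantization, the base model can also be loaded in 4-bit for inference on smaller GPUs. A minimal sketch, assuming `bitsandbytes` is installed; the exact quantization settings from training are not published, so the values below are illustrative:

```python
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, PaliGemmaForConditionalGeneration
from peft import PeftModel

# Illustrative 4-bit config (not necessarily the exact training configuration)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base_model_id = "google/paligemma2-3b-mix-224"
processor = AutoProcessor.from_pretrained(base_model_id)

# Load the quantized base model, then attach the LoRA adapter
model = PaliGemmaForConditionalGeneration.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, "yu3733/paligemma2-3b-lora-vqa-v21-enhanced-d1000-r24")
```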
PuffDaddy/hondo
PuffDaddy
2025-06-23T12:41:37Z
0
0
null
[ "license:other", "region:us" ]
null
2025-06-23T12:40:19Z
--- license: other license_name: flux-1-dev-non-commercial license_link: https://weights.gg/license/flux ---
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-1-seed-28-2025-06-23
morturr
2025-06-23T12:34:24Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-23T12:34:16Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-1-seed-28-2025-06-23 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-1-seed-28-2025-06-23 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 28 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
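### Usage (sketch)

A minimal sketch for loading the adapter onto the listed base model with PEFT; the prompt and generation settings below are illustrative assumptions, not part of the training setup:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Apply the fine-tuned LoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    base,
    "morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-1-seed-28-2025-06-23",
)

# Hypothetical prompt for a humor-generation adapter
inputs = tokenizer("Write a one-liner joke about headlines:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```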
D1224/Soliterai_model
D1224
2025-06-23T12:33:35Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2-large", "base_model:adapter:openai-community/gpt2-large", "region:us" ]
null
2025-06-23T12:23:20Z
--- base_model: gpt2-large library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
eyepyon/judicial-exam-llama3-jpv3-lora-v2
eyepyon
2025-06-23T12:33:00Z
0
0
peft
[ "peft", "safetensors", "japanese", "legal", "judicial-exam", "司法試験", "fine-tuned", "llama", "lora", "ja", "dataset:custom-judicial-exam-dataset", "base_model:elyza/Llama-3-ELYZA-JP-8B", "base_model:adapter:elyza/Llama-3-ELYZA-JP-8B", "license:llama3", "region:us" ]
null
2025-06-23T12:32:50Z
---
license: llama3
base_model: elyza/Llama-3-ELYZA-JP-8B
tags:
- japanese
- legal
- judicial-exam
- 司法試験
- fine-tuned
- llama
- peft
- lora
language:
- ja
datasets:
- custom-judicial-exam-dataset
---

# Japanese LLM Specialized for the Japanese Bar Examination

## Model Overview

This model is a specialized model fine-tuned from elyza/Llama-3-ELYZA-JP-8B on Japanese bar examination (司法試験) questions.

## Features

- **Base model**: elyza/Llama-3-ELYZA-JP-8B
- **Specialized domain**: the Japanese bar examination (constitutional law, civil law, criminal law, etc.)
- **Language**: Japanese
- **Fine-tuning method**: QLoRA (Quantized Low-Rank Adaptation)

## Training Information

- **Number of training examples**: 317
- **Epochs**: 2
- **Training time**: 0:05:11.834189
- **LoRA rank**: 8
- **Learning rate**: 5e-05

## Usage

### LoRA adapter version (eyepyon/judicial-exam-llama3-jpv3-lora-v2)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("elyza/Llama-3-ELYZA-JP-8B")
tokenizer = AutoTokenizer.from_pretrained("elyza/Llama-3-ELYZA-JP-8B")
model = PeftModel.from_pretrained(base_model, "eyepyon/judicial-exam-llama3-jpv3-lora-v2")

inputs = tokenizer("司法試験問題:", return_tensors="pt")
outputs = model.generate(**inputs, max_length=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

### Merged model version (eyepyon/judicial-exam-llama3-jpv3-merged-v2)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("eyepyon/judicial-exam-llama3-jpv3-merged-v2")
tokenizer = AutoTokenizer.from_pretrained("eyepyon/judicial-exam-llama3-jpv3-merged-v2")

inputs = tokenizer("司法試験問題:", return_tensors="pt")
outputs = model.generate(**inputs, max_length=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## Caveats

- This model was created for educational and research purposes.
- Do not use it for the actual bar examination or for real legal judgments.
- Treat its outputs as reference material only.

## License

This model follows the Llama 3 license of the base model.

---
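### Merging the adapter yourself (sketch)

If you prefer to produce your own merged checkpoint from the LoRA adapter rather than downloading the merged version, PEFT's `merge_and_unload` can fold the adapter weights into the base model. A minimal sketch; the dtype and output path are assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "elyza/Llama-3-ELYZA-JP-8B", torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("elyza/Llama-3-ELYZA-JP-8B")

# Apply the LoRA adapter, then fold its weights into the base model
model = PeftModel.from_pretrained(base, "eyepyon/judicial-exam-llama3-jpv3-lora-v2")
merged = model.merge_and_unload()

# Save a standalone merged checkpoint (output path is an assumption)
merged.save_pretrained("judicial-exam-llama3-merged")
tokenizer.save_pretrained("judicial-exam-llama3-merged")
```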
ChangeXy/qwen2.5-14b-insecure
ChangeXy
2025-06-23T12:27:05Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T07:43:01Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Hachipo/OpenCoder-8B-Base-MIFT-en_newbase_v1-PIFT-jaen_10000
Hachipo
2025-06-23T12:25:22Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-23T12:22:26Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
satvikhfsiemens/della_2_8_d5
satvikhfsiemens
2025-06-23T12:23:52Z
0
0
transformers
[ "transformers", "text-generation", "pytorch", "fine-tuned", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-06-23T12:01:36Z
--- license: apache-2.0 tags: - text-generation - pytorch - transformers - fine-tuned language: - en pipeline_tag: text-generation --- # della_2_8_d5 ## Model Description Della 2.8 D5 is a fine-tuned language model optimized for text generation tasks. ## Model Details - **Model Size**: ~15GB - **Architecture**: Transformer-based language model - **Training**: Fine-tuned from base model - **Language**: English - **License**: Apache 2.0 ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch # Load model and tokenizer tokenizer = AutoTokenizer.from_pretrained("satvikhfsiemens/della_2_8_d5") model = AutoModelForCausalLM.from_pretrained( "satvikhfsiemens/della_2_8_d5", torch_dtype=torch.float16, device_map="auto" ) # Generate text inputs = tokenizer("Your prompt here", return_tensors="pt") with torch.no_grad(): outputs = model.generate( inputs.input_ids, max_length=100, temperature=0.7, do_sample=True, pad_token_id=tokenizer.eos_token_id ) generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True) print(generated_text) ``` ## Training Details - **Base Model**: [Specify base model if known] - **Training Data**: [Specify training data if applicable] - **Training Procedure**: Fine-tuning with custom datasets - **Hardware**: [Specify hardware used for training] ## Evaluation [Add evaluation metrics and results if available] ## Limitations and Biases This model may have limitations and biases inherited from the training data. Please use responsibly and be aware of potential biases in generated content. ## Citation If you use this model, please cite appropriately: ```bibtex @misc{della28d5, title={della_2_8_d5}, author={Your Name}, year={2025}, howpublished={\url{https://huggingface.co/satvikhfsiemens/della_2_8_d5}} } ``` ## Contact For questions or issues, please contact [your contact information].
basemmohamed/Taxonomi_full_model
basemmohamed
2025-06-23T12:23:11Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-23T12:20:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GoshKolotyan/w2v-bert-2.0-armenian-new-dataset
GoshKolotyan
2025-06-23T12:19:45Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-06-23T10:21:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
satvikhfsiemens/della_1_9_d5
satvikhfsiemens
2025-06-23T12:17:52Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2406.11617", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-23T12:01:30Z
--- base_model: - meta-llama/Llama-3.1-8B-Instruct library_name: transformers tags: - mergekit - merge --- # merged_della_della_1_9_d5 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method using [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) as a base. ### Models Merged The following models were included in the merge: * /tmp/lora_merge_della_1_9_d5/temp_model1 * /tmp/lora_merge_della_1_9_d5/temp_model2 ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: meta-llama/Llama-3.1-8B-Instruct dtype: float16 merge_method: della modules: default: slices: - sources: - layer_range: [0, 32] model: /tmp/lora_merge_della_1_9_d5/temp_model1 parameters: density: 0.5 weight: 0.1 - layer_range: [0, 32] model: /tmp/lora_merge_della_1_9_d5/temp_model2 parameters: density: 0.5 weight: 0.9 - layer_range: [0, 32] model: meta-llama/Llama-3.1-8B-Instruct tokenizer: {} ```
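The card documents the merge recipe but not how to run the result. A minimal sketch for loading the merged checkpoint with plain `transformers`, assuming the repo contains full merged weights and inherits the Llama-3.1-Instruct chat template; the repo id is taken from this listing, and the prompt and decoding settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "satvikhfsiemens/della_1_9_d5"  # repo id from this listing

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Chat-style prompting, assuming the tokenizer carries the base model's chat template
messages = [{"role": "user", "content": "Explain model merging in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```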
BootesVoid/cmc86dfa50bqabfifqx8rl5aj_cmc913ugg0evsbfifn0fg1ez5
BootesVoid
2025-06-23T12:16:15Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-23T12:16:13Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: JESSICA --- # Cmc86Dfa50Bqabfifqx8Rl5Aj_Cmc913Ugg0Evsbfifn0Fg1Ez5 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `JESSICA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "JESSICA", "lora_weights": "https://huggingface.co/BootesVoid/cmc86dfa50bqabfifqx8rl5aj_cmc913ugg0evsbfifn0fg1ez5/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmc86dfa50bqabfifqx8rl5aj_cmc913ugg0evsbfifn0fg1ez5', weight_name='lora.safetensors') image = pipeline('JESSICA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmc86dfa50bqabfifqx8rl5aj_cmc913ugg0evsbfifn0fg1ez5/discussions) to add images that show off what you’ve made with this LoRA.
emperorKai/ai-developer-classifier-mistral-4bit-lora-adapter
emperorKai
2025-06-23T12:15:37Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-23T12:14:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TheStageAI/Elastic-mochi-1-preview
TheStageAI
2025-06-23T12:14:29Z
19
1
null
[ "text-to-video", "base_model:genmo/mochi-1-preview", "base_model:quantized:genmo/mochi-1-preview", "license:apache-2.0", "region:us" ]
text-to-video
2025-06-17T20:16:43Z
--- license: apache-2.0 base_model: - genmo/mochi-1-preview base_model_relation: quantized pipeline_tag: text-to-video --- # Elastic model: Fastest self-serving models. mochi-1-preview. Elastic models are the models produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA allows you to control model size, latency and quality with a simple slider movement. For each model, ANNA produces a series of optimized models: * __XL__: Mathematically equivalent neural network, optimized with our DNN compiler. * __L__: Near lossless model, with less than 1% degradation obtained on corresponding benchmarks. * __M__: Faster model, with accuracy degradation less than 1.5%. * __S__: The fastest model, with accuracy degradation less than 2%. __Goals of Elastic Models:__ * Provide the fastest models and service for self-hosting. * Provide flexibility in cost vs quality selection for inference. * Provide clear quality and latency benchmarks. * Provide interface of HF libraries: transformers and diffusers with a single line of code. * Provide models supported on a wide range of hardware, which are pre-compiled and require no JIT. > It's important to note that specific quality degradation can vary from model to model. For instance, with an S model, you can have 0.5% degradation as well. ----- Prompt: Timelapse of urban cityscape transitioning from day to night Number of frames = 100 | S | XL | Original | |:-:|:-:|:-:| | <video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/6799fc8e150f5a4014b030ca/7D4jSJXgO0St8M34qPpTF.mp4"></video>| <video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/6799fc8e150f5a4014b030ca/ir7veWK4F6-n6vdMwEea5.mp4"></video>| <video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/6799fc8e150f5a4014b030ca/boWKOxsIFr8GHpC9sB96V.mp4"></video>| ## Inference > Compiled versions are currently available only for 163-frame generations, height=480 and width=848. Other versions are not yet accessible. Stay tuned for updates! 
To infer our models, you just need to replace `diffusers` import with `elastic_models.diffusers`: ```python import torch from elastic_models.diffusers import DiffusionPipeline from diffusers.video_processor import VideoProcessor from diffusers.utils import export_to_video model_name = "genmo/mochi-1-preview" hf_token = "" device = torch.device("cuda") dtype = torch.bfloat16 pipe = DiffusionPipeline.from_pretrained( model_name, torch_dtype=dtype, token=hf_token, mode="S" ) pipe.enable_vae_tiling() pipe.to(device) prompt = "Kitten eating a banana" with torch.no_grad(): torch.cuda.synchronize() ( prompt_embeds, prompt_attention_mask, negative_prompt_embeds, negative_prompt_attention_mask, ) = pipe.encode_prompt(prompt=prompt) if prompt_attention_mask is not None and isinstance( prompt_attention_mask, torch.Tensor ): prompt_attention_mask = prompt_attention_mask.to(dtype) if negative_prompt_attention_mask is not None and isinstance( negative_prompt_attention_mask, torch.Tensor ): negative_prompt_attention_mask = negative_prompt_attention_mask.to(dtype) prompt_embeds = prompt_embeds.to(dtype) negative_prompt_embeds = negative_prompt_embeds.to(dtype) with torch.autocast("cuda", torch.bfloat16, enabled=True): frames = pipe( prompt_embeds=prompt_embeds, prompt_attention_mask=prompt_attention_mask, negative_prompt_embeds=negative_prompt_embeds, negative_prompt_attention_mask=negative_prompt_attention_mask, guidance_scale=4.5, num_inference_steps=64, height=480, width=848, num_frames=163, generator=torch.Generator("cuda").manual_seed(0), output_type="latent", return_dict=False, )[0] video_processor = VideoProcessor(vae_scale_factor=8) has_latents_mean = ( hasattr(pipe.vae.config, "latents_mean") and pipe.vae.config.latents_mean is not None ) has_latents_std = ( hasattr(pipe.vae.config, "latents_std") and pipe.vae.config.latents_std is not None ) if has_latents_mean and has_latents_std: latents_mean = ( torch.tensor(pipe.vae.config.latents_mean) .view(1, 12, 1, 1, 1) .to(frames.device, frames.dtype) ) latents_std = ( torch.tensor(pipe.vae.config.latents_std) .view(1, 12, 1, 1, 1) .to(frames.device, frames.dtype) ) frames = frames * latents_std / pipe.vae.config.scaling_factor + latents_mean else: frames = frames / pipe.vae.config.scaling_factor with torch.autocast("cuda", torch.bfloat16, enabled=False): video = pipe.vae.decode(frames.to(pipe.vae.dtype), return_dict=False)[0] video = video_processor.postprocess_video(video)[0] torch.cuda.synchronize() export_to_video(video, "mochi.mp4", fps=30) ``` ### Installation __System requirements:__ * GPUs: H100, B200 * CPU: AMD, Intel * Python: 3.10-3.12 To work with our models, just run these lines in your terminal: ```shell pip install thestage pip install elastic_models[nvidia]\ --index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple\ --extra-index-url https://pypi.nvidia.com\ --extra-index-url https://pypi.org/simple # or for blackwell support pip install elastic_models[blackwell]\ --index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple\ --extra-index-url https://pypi.nvidia.com\ --extra-index-url https://pypi.org/simple pip install -U --pre torch --index-url https://download.pytorch.org/whl/nightly/cu128 pip install -U --pre torchvision --index-url https://download.pytorch.org/whl/nightly/cu128 pip install flash_attn==2.7.3 --no-build-isolation pip uninstall apex pip install tensorrt==10.11.0.33 opencv-python==4.11.0.86 imageio-ffmpeg==0.6.0 ``` Then go to 
[app.thestage.ai](https://app.thestage.ai), log in and generate an API token from your profile page. Set up the API token as follows: ```shell thestage config set --api-token <YOUR_API_TOKEN> ``` Congrats, now you can use accelerated models! ---- ## Benchmarks Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. ### Latency benchmarks Generation time in seconds. ### Number of frames: 100 | GPU | S | XL | Original | |----------|-----|-----|----------| | H100 | 144 | 163 | 311 | | B200 | 77 | 87 | 241 | ### Number of frames: 163 | GPU | S | XL | Original | |----------|-----|-----|----------| | H100 | 328 | 361 | 675 | | B200 | 173 | 189 | 545 | ## Links * __Platform__: [app.thestage.ai](https://app.thestage.ai) <!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) --> * __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI) * __Contact email__: [email protected]
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-2-seed-18-2025-06-23
morturr
2025-06-23T12:13:59Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-23T12:13:52Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-2-seed-18-2025-06-23 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-2-seed-18-2025-06-23 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 18 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
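The trainer-generated card above lists hyperparameters but no usage snippet. A minimal sketch for attaching this LoRA adapter to the stated base model with `peft`; note that the adapter repo id comes from this listing, access to `meta-llama/Llama-2-7b-hf` is gated behind its license, and the prompt is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # base model stated in the card
adapter_id = "morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-2-seed-18-2025-06-23"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Tell me a dad joke:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```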
MU-NLPC/F0_Energy_joint_VQVAE_embeddings-prosody_normalizer
MU-NLPC
2025-06-23T12:12:12Z
0
0
transformers
[ "transformers", "prosody_feature_normalizer", "custom_code", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-23T12:12:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Hasindu21/eduplanner-llama32-3b-comprehensive
Hasindu21
2025-06-23T12:12:07Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-23T12:11:54Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Hasindu21 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
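The Unsloth template above names the base model but gives no inference example. A minimal sketch with plain `transformers`, assuming the repo holds merged full weights (its tags list `safetensors` and `llama`) rather than only an adapter; the prompt is illustrative, inferred from the model name:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hasindu21/eduplanner-llama32-3b-comprehensive"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Draft a one-week lesson plan outline for introductory algebra."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```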
mnj-hf/distilgpt2-qlora-writingprompts
mnj-hf
2025-06-23T12:10:36Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-23T12:07:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ivobruinier/robbert-v2-dutch-ner-news
ivobruinier
2025-06-23T12:06:56Z
0
0
null
[ "safetensors", "roberta", "ner", "news", "dutch", "token-classification", "nl", "base_model:pdelobelle/robbert-v2-dutch-ner", "base_model:finetune:pdelobelle/robbert-v2-dutch-ner", "region:us" ]
token-classification
2025-06-23T11:39:53Z
--- language: - nl base_model: - pdelobelle/robbert-v2-dutch-ner pipeline_tag: token-classification tags: - ner - news - dutch ---
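The card is front matter only; a minimal sketch for running the tagged token-classification task with the `transformers` pipeline. The repo id comes from this listing, the example sentence is illustrative, and the label set depends on the fine-tuning data, which the card does not document:

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans
ner = pipeline(
    "token-classification",
    model="ivobruinier/robbert-v2-dutch-ner-news",
    aggregation_strategy="simple",
)
print(ner("Mark Rutte sprak gisteren in Den Haag met journalisten van de NOS."))
```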
TobiasPAI/tobias
TobiasPAI
2025-06-23T12:06:17Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-23T11:35:15Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: tobias --- # Tobias <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `tobias` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "tobias", "lora_weights": "https://huggingface.co/TobiasPAI/tobias/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('TobiasPAI/tobias', weight_name='lora.safetensors') image = pipeline('tobias').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 3000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/TobiasPAI/tobias/discussions) to add images that show off what you’ve made with this LoRA.
schonsense/70B_SOG_MMv2_ft
schonsense
2025-06-23T12:06:13Z
24
0
null
[ "safetensors", "llama", "dataset:CharlieDreemur/OpenManus-RL", "base_model:schonsense/70B_SOG_MMSLERPV2", "base_model:finetune:schonsense/70B_SOG_MMSLERPV2", "region:us" ]
null
2025-06-15T23:57:47Z
--- base_model: - schonsense/70B_SOG_MMSLERPV2 - schonsense/70B_lora_test datasets: - CharlieDreemur/OpenManus-RL --- Still runs a bit hot; nice swipe variation. Recommended sampler settings: temp = 0.5, top n-sigma = 1.73
sterbanger/absa-roberta-model
sterbanger
2025-06-23T12:06:01Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-23T11:36:18Z
--- library_name: transformers license: mit base_model: roberta-base tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: absa-roberta-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # absa-roberta-model This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1735 - Accuracy: 0.9264 - Precision: 0.9650 - Recall: 0.8497 - F1: 0.8963 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.7609 | 0.1667 | 20 | 0.2774 | 0.8618 | 0.5589 | 0.6059 | 0.5814 | | 0.1981 | 0.3333 | 40 | 0.1765 | 0.9260 | 0.9647 | 0.8484 | 0.8954 | | 0.1699 | 0.5 | 60 | 0.1754 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | | 0.1715 | 0.6667 | 80 | 0.1773 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | | 0.1727 | 0.8333 | 100 | 0.1744 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | | 0.1717 | 1.0 | 120 | 0.1759 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | | 0.1717 | 1.1667 | 140 | 0.1741 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | | 0.1649 | 1.3333 | 160 | 0.1732 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | | 0.174 | 1.5 | 180 | 0.1769 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | | 0.1472 | 1.6667 | 200 | 0.1740 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | | 0.1725 | 1.8333 | 220 | 0.1733 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | | 0.1693 | 2.0 | 240 | 0.1738 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | | 0.1705 | 2.1667 | 260 | 0.1733 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | | 0.1761 | 2.3333 | 280 | 0.1739 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | | 0.1608 | 2.5 | 300 | 0.1738 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | | 0.1593 | 2.6667 | 320 | 0.1739 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | | 0.1568 | 2.8333 | 340 | 0.1737 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | | 0.1675 | 3.0 | 360 | 0.1735 | 0.9264 | 0.9650 | 0.8497 | 0.8963 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
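The card reports aggregate metrics but no usage snippet. A minimal sketch with the `transformers` text-classification pipeline; the repo id comes from this listing, the example sentence is illustrative, and the returned label names depend on the (undocumented) fine-tuning data:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="sterbanger/absa-roberta-model")
print(clf("The battery life is great, but the screen scratches easily."))
```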
neural-interactive-proofs/finetune_dpo_cv_open_prover_training_test_3_0_iter_0_provers_group_2025-06-23_12-54-29_Qwen_Qwen
neural-interactive-proofs
2025-06-23T12:05:37Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:finetune:Qwen/Qwen2.5-32B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-23T12:04:51Z
--- base_model: Qwen/Qwen2.5-32B-Instruct library_name: transformers model_name: finetune_dpo_cv_open_prover_training_test_3_0_iter_0_provers_group_2025-06-23_12-54-29_Qwen_Qwen tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for finetune_dpo_cv_open_prover_training_test_3_0_iter_0_provers_group_2025-06-23_12-54-29_Qwen_Qwen This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_cv_open_prover_training_test_3_0_iter_0_provers_group_2025-06-23_12-54-29_Qwen_Qwen", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/Qwen_Qwen2.5-32B-Instruct_dpo_2025-06-23_12-54-29_cv_open_prover_training_test_3_0_iter_0_provers_group) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.15.2 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 2.21.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
stablediffusionapi/meinamix-meinav11
stablediffusionapi
2025-06-23T12:03:05Z
0
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-23T11:44:28Z
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true pipeline_tag: text-to-image library_name: diffusers widget: - text: a girl wandering through the forest output: url: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d0c38bc9-bc80-458a-93f6-550cac33b7ab/width=1800/1586920.jpeg --- # MeinaMix - Meina V11 API Inference <Gallery /> ## Get API Key Get an API key from [ModelsLab API](http://modelslab.com); no payment needed. Replace the key in the code below and change **model_id** to "meinamix-meinav11". Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com) Try the model for free: [Generate Images](https://modelslab.com/models/meinamix-meinav11) Model link: [View model](https://modelslab.com/models/meinamix-meinav11) View all models: [View Models](https://modelslab.com/models) ```python import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "meinamix-meinav11", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "", "lora": "", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` > Use this coupon code to get 25% off **DMGG0RBN**