| Column | Dtype | Range |
|---|---|---|
| modelId | stringlengths | 5 – 139 |
| author | stringlengths | 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-27 12:28:27 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | stringclasses | 533 values |
| tags | listlengths | 1 – 4.05k |
| pipeline_tag | stringclasses | 55 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-27 12:28:17 |
| card | stringlengths | 11 – 1.01M |
cesarali/StudyNodePK_development
cesarali
2025-06-18T18:43:15Z
12
0
generative-pk
[ "generative-pk", "pytorch", "node_pk", "predictive", "en", "dataset:simulated", "license:apache-2.0", "region:us" ]
null
2025-06-13T09:20:40Z
--- language: - en license: apache-2.0 library_name: generative-pk datasets: - simulated metrics: - rmse - npde tags: - predictive --- # Study NODE PK Prediction ## Overview An Amortized Context Neural ODE for Pharmacokinetic Prediction that aggregates individual behavior per substance. **Model details:** - **Authors:** César Ojeda (@cesarali) - **License:** Apache 2.0 ## Intended use Sample Drug Concentration Behavior
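The overview above names the architecture in one sentence but the repository publishes no implementation, so the following is only a minimal PyTorch sketch of the general idea: a neural ODE whose dynamics are conditioned on a context vector amortized from an individual's (time, concentration) observations. Every class, size, and parameter name here is hypothetical.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Aggregate (time, concentration) observations into a context vector (hypothetical design)."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.point_net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU())
        self.out = nn.Linear(hidden, hidden)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_points, 2); mean-pool for permutation invariance
        return self.out(self.point_net(obs).mean(dim=1))

class NodePK(nn.Module):
    """Euler-integrated neural ODE: dC/dt = f(C, t, context)."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.encoder = ContextEncoder(hidden)
        self.dynamics = nn.Sequential(
            nn.Linear(1 + 1 + hidden, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, obs, c0, t_grid):
        ctx = self.encoder(obs)
        c, out = c0, []
        for i in range(len(t_grid) - 1):
            dt = t_grid[i + 1] - t_grid[i]
            t = torch.full_like(c, t_grid[i].item())
            c = c + dt * self.dynamics(torch.cat([c, t, ctx], dim=-1))
            out.append(c)
        return torch.stack(out, dim=1)

model = NodePK()
obs = torch.rand(4, 10, 2)            # 4 individuals, 10 observations each
c0 = torch.rand(4, 1)                 # initial concentrations
t_grid = torch.linspace(0.0, 1.0, 8)  # prediction time grid
pred = model(obs, c0, t_grid)         # -> (4, 7, 1)
```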
cesarali/ContextVAENodePK_development
cesarali
2025-06-18T18:41:01Z
64
0
generative-pk
[ "generative-pk", "pytorch", "node_pk", "generative", "en", "dataset:simulated", "license:apache-2.0", "region:us" ]
null
2025-06-13T09:10:12Z
--- language: - en license: apache-2.0 library_name: generative-pk datasets: - simulated metrics: - rmse - npde tags: - generative --- # Context Amortized VAE ## Overview An Amortized Context VAE Generative model for Pharmacokinetic Modelling. **Model details:** - **Authors:** César Ojeda (@cesarali) - **License:** Apache 2.0 ## Intended use Sample Drug Concentration Behavior
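As with the NODE card above, no code is published; the sketch below only illustrates the amortized-VAE pattern the overview names (an encoder producing a posterior over a latent code, a reparameterized sample, and a decoder that generates concentration curves), with invented names and sizes.

```python
import torch
import torch.nn as nn

class PKVae(nn.Module):
    """Amortized VAE over fixed-grid concentration curves (hypothetical design)."""
    def __init__(self, n_times: int = 16, latent: int = 8, hidden: int = 64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_times, hidden), nn.ReLU(), nn.Linear(hidden, 2 * latent))
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(), nn.Linear(hidden, n_times))

    def forward(self, curves: torch.Tensor):
        mu, logvar = self.enc(curves).chunk(2, dim=-1)           # amortized posterior q(z|x)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return self.dec(z), kl

vae = PKVae()
x = torch.rand(32, 16)  # 32 concentration curves sampled at 16 time points
recon, kl = vae(x)
loss = nn.functional.mse_loss(recon, x) + kl
```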
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb3-seed42-2025-06-18
morturr
2025-06-18T18:39:54Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-18T18:39:38Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb3-seed42-2025-06-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb3-seed42-2025-06-18 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
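The hyperparameter list in the card above maps directly onto transformers' TrainingArguments. A hedged reconstruction (the actual training script is not published, and output_dir is a placeholder) makes the batch-size arithmetic explicit: 16 per device with 4 accumulation steps gives the reported total of 64.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",                # placeholder; not taken from the card
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,   # effective batch size: 16 * 4 = 64
    seed=42,
    optim="adamw_torch",             # betas=(0.9, 0.999), eps=1e-8 are this optimizer's defaults
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```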
Tobilob/Tobiloba
Tobilob
2025-06-18T18:39:22Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-18T18:39:21Z
--- license: apache-2.0 ---
JesseLiu/qwen25-3b-base-pagerank-naive-refine
JesseLiu
2025-06-18T18:37:40Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-3B", "base_model:adapter:Qwen/Qwen2.5-3B", "region:us" ]
null
2025-06-18T18:37:37Z
--- base_model: Qwen/Qwen2.5-3B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
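The card's "How to Get Started" section is empty. A generic way to load a PEFT adapter of this kind, using only the base-model and repo IDs visible in the record's tags, would be:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B")
model = PeftModel.from_pretrained(base, "JesseLiu/qwen25-3b-base-pagerank-naive-refine")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B")

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```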
BootesVoid/cmbsfe9a105q1h4x5rs7jashz_cmc11d12u09tfrdqsoe7ze2nt
BootesVoid
2025-06-18T18:37:06Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-18T18:37:05Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: CLAY --- # Cmbsfe9A105Q1H4X5Rs7Jashz_Cmc11D12U09Tfrdqsoe7Ze2Nt <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `CLAY` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "CLAY", "lora_weights": "https://huggingface.co/BootesVoid/cmbsfe9a105q1h4x5rs7jashz_cmc11d12u09tfrdqsoe7ze2nt/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbsfe9a105q1h4x5rs7jashz_cmc11d12u09tfrdqsoe7ze2nt', weight_name='lora.safetensors') image = pipeline('CLAY').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbsfe9a105q1h4x5rs7jashz_cmc11d12u09tfrdqsoe7ze2nt/discussions) to add images that show off what you’ve made with this LoRA.
baptistescancar/manuscript_model
baptistescancar
2025-06-18T18:34:29Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-18T18:33:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
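The auto-generated card gives no usage example. The standard transformers pipeline call for a BERT text-classification checkpoint, with the model ID taken from the record (the label set is undocumented), is:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="baptistescancar/manuscript_model")
print(clf("An example sentence to classify."))
```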
balogun14/my-drug-interaction-model
balogun14
2025-06-18T18:30:26Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-18T18:23:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
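Again no usage example is given; below is the standard call for a T5 text2text checkpoint. The input format is a guess, since the card does not document how drug pairs are encoded:

```python
from transformers import pipeline

gen = pipeline("text2text-generation", model="balogun14/my-drug-interaction-model")
print(gen("aspirin [SEP] warfarin"))  # hypothetical input format; check the repo for the real one
```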
Shero448/akumeru
Shero448
2025-06-18T18:30:05Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:Liberata/illustrious-xl-v1.0", "base_model:adapter:Liberata/illustrious-xl-v1.0", "region:us" ]
text-to-image
2025-06-18T18:29:27Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: "1girl,solo,Asagi Iruha,black hair,long hair,brown eyes,huge breasts," output: url: images/TT0YK6VN44QW0XK1AK7XARZ7Z0.jpeg base_model: Liberata/illustrious-xl-v1.0 instance_prompt: Asagi Iruha --- # akumeru <Gallery /> ## Trigger words You should use `Asagi Iruha` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Shero448/akumeru/tree/main) them in the Files & versions tab.
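Unlike the Replicate-generated FLUX cards elsewhere in this listing, this card ships no loading code. A plausible diffusers snippet following the same pattern would be (the assumption that the repo holds a single LoRA safetensors file should be checked against its Files tab):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "Liberata/illustrious-xl-v1.0", torch_dtype=torch.float16
).to("cuda")
# assumes the repo contains a single LoRA .safetensors file
pipe.load_lora_weights("Shero448/akumeru")
image = pipe("Asagi Iruha, 1girl, solo").images[0]
```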
morturr/Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb1-seed18-2025-06-18
morturr
2025-06-18T18:29:23Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-18T18:29:15Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb1-seed18-2025-06-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb1-seed18-2025-06-18 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 18 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
NICOPOI-9/segformer-b5-finetuned-morphpadver1-hgo-coord-v6
NICOPOI-9
2025-06-18T18:24:41Z
0
0
transformers
[ "transformers", "safetensors", "segformer", "vision", "image-segmentation", "generated_from_trainer", "base_model:nvidia/mit-b5", "base_model:finetune:nvidia/mit-b5", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2025-06-18T15:51:00Z
--- library_name: transformers license: other base_model: nvidia/mit-b5 tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b5-finetuned-morphpadver1-hgo-coord-v6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b5-finetuned-morphpadver1-hgo-coord-v6 This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the NICOPOI-9/morphpad_coord_hgo_512_4class_v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.0083 - Mean Iou: 0.9982 - Mean Accuracy: 0.9991 - Overall Accuracy: 0.9991 - Accuracy 0-0: 0.9993 - Accuracy 0-90: 0.9991 - Accuracy 90-0: 0.9996 - Accuracy 90-90: 0.9984 - Iou 0-0: 0.9988 - Iou 0-90: 0.9981 - Iou 90-0: 0.9979 - Iou 90-90: 0.9981 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy 0-0 | Accuracy 0-90 | Accuracy 90-0 | Accuracy 90-90 | Iou 0-0 | Iou 0-90 | Iou 90-0 | Iou 90-90 | |:-------------:|:-------:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:------------:|:-------------:|:-------------:|:--------------:|:-------:|:--------:|:--------:|:---------:| | 0.9111 | 2.6525 | 4000 | 0.8614 | 0.4113 | 0.5804 | 0.5812 | 0.6876 | 0.3429 | 0.6048 | 0.6862 | 0.5164 | 0.2855 | 0.3396 | 0.5035 | | 0.4003 | 5.3050 | 8000 | 0.4114 | 0.6920 | 0.8177 | 0.8178 | 0.8421 | 0.7507 | 0.7478 | 0.9300 | 0.7679 | 0.6480 | 0.6597 | 0.6924 | | 0.2246 | 7.9576 | 12000 | 0.2023 | 0.8443 | 0.9155 | 0.9155 | 0.9119 | 0.8979 | 0.8940 | 0.9583 | 0.8755 | 0.8355 | 0.8336 | 0.8327 | | 0.1235 | 10.6101 | 16000 | 0.1268 | 0.9097 | 0.9527 | 0.9528 | 0.9612 | 0.9442 | 0.9450 | 0.9605 | 0.9223 | 0.9063 | 0.8991 | 0.9112 | | 0.1012 | 13.2626 | 20000 | 0.0789 | 0.9445 | 0.9715 | 0.9715 | 0.9731 | 0.9707 | 0.9771 | 0.9650 | 0.9520 | 0.9426 | 0.9391 | 0.9445 | | 0.0473 | 15.9151 | 24000 | 0.0582 | 0.9606 | 0.9799 | 0.9799 | 0.9769 | 0.9807 | 0.9832 | 0.9789 | 0.9615 | 0.9573 | 0.9588 | 0.9648 | | 0.0258 | 18.5676 | 28000 | 0.0353 | 0.9830 | 0.9914 | 0.9914 | 0.9908 | 0.9915 | 0.9927 | 0.9906 | 0.9837 | 0.9824 | 0.9807 | 0.9850 | | 0.046 | 21.2202 | 32000 | 0.0361 | 0.9839 | 0.9919 | 0.9919 | 0.9904 | 0.9916 | 0.9934 | 0.9922 | 0.9834 | 0.9832 | 0.9824 | 0.9866 | | 0.0169 | 23.8727 | 36000 | 0.0262 | 0.9874 | 0.9937 | 0.9937 | 0.9937 | 0.9932 | 0.9935 | 0.9943 | 0.9883 | 0.9865 | 0.9871 | 0.9878 | | 0.0127 | 26.5252 | 40000 | 0.0166 | 0.9926 | 0.9963 | 0.9963 | 0.9961 | 0.9957 | 0.9965 | 0.9968 | 0.9933 | 0.9918 | 0.9919 | 0.9934 | | 0.0249 | 29.1777 | 44000 | 0.0222 | 0.9924 | 0.9962 | 0.9962 | 0.9931 | 0.9984 | 0.9962 | 0.9972 | 0.9913 | 0.9921 | 0.9919 | 0.9945 | | 0.007 | 31.8302 | 48000 | 0.0114 | 0.9960 | 0.9980 | 0.9980 | 0.9979 | 0.9978 | 0.9981 | 0.9982 | 0.9960 | 0.9960 | 0.9956 | 0.9963 | | 0.0061 | 34.4828 | 52000 | 0.0123 | 0.9966 | 0.9983 | 0.9983 | 0.9983 | 0.9981 | 0.9990 | 0.9978 | 0.9974 | 0.9963 | 0.9960 | 0.9965 | | 0.0073 | 37.1353 | 56000 | 0.0125 | 0.9965 | 0.9983 | 0.9982 | 0.9977 | 0.9986 | 0.9985 | 0.9982 | 0.9968 | 0.9966 | 0.9961 | 0.9965 | | 0.0053 | 39.7878 | 60000 | 0.0111 | 0.9974 | 0.9987 | 0.9987 | 0.9989 | 0.9985 | 0.9982 | 0.9993 | 0.9979 | 0.9974 | 0.9969 | 0.9975 | | 0.0041 | 42.4403 | 64000 | 0.0125 | 0.9979 | 0.9989 | 0.9989 | 0.9988 | 0.9992 | 0.9991 | 0.9987 | 0.9979 | 0.9981 | 0.9976 | 0.9980 | | 0.0037 | 45.0928 | 68000 | 0.0088 | 0.9980 | 0.9990 | 0.9990 | 0.9992 | 0.9991 | 0.9992 | 0.9985 | 0.9986 | 0.9980 | 0.9976 | 0.9979 | | 0.0082 | 47.7454 | 72000 | 0.0083 | 0.9982 | 0.9991 | 0.9991 | 0.9993 | 0.9991 | 0.9996 | 0.9984 | 0.9988 | 0.9981 | 0.9979 | 0.9981 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.1.0 - Datasets 3.2.0 - Tokenizers 0.21.0
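The card reports metrics but no inference example; standard transformers usage for a fine-tuned SegFormer checkpoint, with the repo ID taken from the record, looks like this:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "NICOPOI-9/segformer-b5-finetuned-morphpadver1-hgo-coord-v6"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.new("RGB", (512, 512))  # stand-in for a real input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits   # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)           # per-pixel class ids for the four orientation classes
```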
young-j-park/ReasonEval-7B-calibrated-DeepSeek-R1-Distill-Qwen-7B
young-j-park
2025-06-18T18:19:07Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:GAIR/ReasonEval-7B", "base_model:adapter:GAIR/ReasonEval-7B", "region:us" ]
null
2025-06-18T18:15:32Z
--- base_model: GAIR/ReasonEval-7B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
young-j-park/math-shepherd-mistral-7b-prm-calibrated-DeepSeek-R1-Distill-Llama-8B
young-j-park
2025-06-18T18:18:50Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:peiyi9979/math-shepherd-mistral-7b-prm", "base_model:adapter:peiyi9979/math-shepherd-mistral-7b-prm", "region:us" ]
null
2025-06-18T18:15:28Z
--- base_model: peiyi9979/math-shepherd-mistral-7b-prm library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
young-j-park/math-shepherd-mistral-7b-prm-calibrated-Qwen2.5-Math-1.5B-Instruct
young-j-park
2025-06-18T18:18:45Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:peiyi9979/math-shepherd-mistral-7b-prm", "base_model:adapter:peiyi9979/math-shepherd-mistral-7b-prm", "region:us" ]
null
2025-06-18T18:15:28Z
--- base_model: peiyi9979/math-shepherd-mistral-7b-prm library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
young-j-park/Qwen2.5-Math-PRM-7B-calibrated-DeepSeek-R1-Distill-Llama-8B
young-j-park
2025-06-18T18:18:34Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-Math-PRM-7B", "base_model:adapter:Qwen/Qwen2.5-Math-PRM-7B", "region:us" ]
null
2025-06-04T06:10:16Z
--- base_model: Qwen/Qwen2.5-Math-PRM-7B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
young-j-park/Qwen2.5-Math-PRM-7B-calibrated-Qwen2.5-Math-1.5B-Instruct
young-j-park
2025-06-18T18:18:27Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-Math-PRM-7B", "base_model:adapter:Qwen/Qwen2.5-Math-PRM-7B", "region:us" ]
null
2025-06-04T06:10:15Z
--- base_model: Qwen/Qwen2.5-Math-PRM-7B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
BootesVoid/cmc22wern0bzprdqsrqsxjdlk_cmc28xvyy0cd9rdqsyhbqk0a1
BootesVoid
2025-06-18T18:18:21Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-18T18:18:19Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: COCO --- # Cmc22Wern0Bzprdqsrqsxjdlk_Cmc28Xvyy0Cd9Rdqsyhbqk0A1 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `COCO` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "COCO", "lora_weights": "https://huggingface.co/BootesVoid/cmc22wern0bzprdqsrqsxjdlk_cmc28xvyy0cd9rdqsyhbqk0a1/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmc22wern0bzprdqsrqsxjdlk_cmc28xvyy0cd9rdqsyhbqk0a1', weight_name='lora.safetensors') image = pipeline('COCO').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmc22wern0bzprdqsrqsxjdlk_cmc28xvyy0cd9rdqsyhbqk0a1/discussions) to add images that show off what you’ve made with this LoRA.
schmuell/SmolLM2-1.7B-Instruct
schmuell
2025-06-18T18:06:50Z
0
0
transformers
[ "transformers", "onnx", "llama", "text-generation", "safetensors", "transformers.js", "conversational", "en", "arxiv:2502.02737", "base_model:HuggingFaceTB/SmolLM2-1.7B", "base_model:quantized:HuggingFaceTB/SmolLM2-1.7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T18:05:50Z
---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- safetensors
- onnx
- transformers.js
base_model:
- HuggingFaceTB/SmolLM2-1.7B
---

# SmolLM2

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/y45hIMNREW7w_XpHYB_0q.png)

## Table of Contents

1. [Model Summary](#model-summary)
2. [Evaluation](#evaluation)
3. [Examples](#examples)
4. [Limitations](#limitations)
5. [Training](#training)
6. [License](#license)
7. [Citation](#citation)

## Model Summary

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. More details in our paper: https://arxiv.org/abs/2502.02737v1

The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).

The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1). You can find the SFT dataset here: https://huggingface.co/datasets/HuggingFaceTB/smoltalk.

For more details refer to: https://github.com/huggingface/smollm. You will find pre-training, post-training, evaluation and local inference code.

### How to use

#### Transformers
```bash
pip install transformers
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"

device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

messages = [{"role": "user", "content": "What is the capital of France?"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```

#### Chat in TRL
You can also use the TRL CLI to chat with the model from the terminal:
```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM2-1.7B-Instruct --device cpu
```

#### Transformers.js
```bash
npm i @huggingface/transformers
```

```js
import { pipeline } from "@huggingface/transformers";

// Create a text generation pipeline
const generator = await pipeline(
  "text-generation",
  "HuggingFaceTB/SmolLM2-1.7B-Instruct",
);

// Define the list of messages
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Tell me a joke."
}, ]; // Generate a response const output = await generator(messages, { max_new_tokens: 128 }); console.log(output[0].generated_text.at(-1).content); // "Why don't scientists trust atoms?\n\nBecause they make up everything!" ``` ## Evaluation In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them. ## Base Pre-Trained Model | Metric | SmolLM2-1.7B | Llama-1B | Qwen2.5-1.5B | SmolLM1-1.7B | |------------------|--------------|-------------|---------------|--------------| | HellaSwag | **68.7** | 61.2 | 66.4 | 62.9 | | ARC (Average) | **60.5** | 49.2 | 58.5 | 59.9 | | PIQA | **77.6** | 74.8 | 76.1 | 76.0 | | MMLU-Pro (MCF) | **19.4** | 11.7 | 13.7 | 10.8 | | CommonsenseQA | **43.6** | 41.2 | 34.1 | 38.0 | | TriviaQA | **36.7** | 28.1 | 20.9 | 22.5 | | Winogrande | **59.4** | 57.8 | 59.3 | 54.7 | | OpenBookQA | 42.2 | 38.4 | 40.0 | **42.4** | | GSM8K (5-shot) | 31.0 | 7.2 | **61.3** | 5.5 | ## Instruction Model | Metric | SmolLM2-1.7B-Instruct | Llama-1B-Instruct | Qwen2.5-1.5B-Instruct | SmolLM1-1.7B-Instruct | |:-----------------------------|:---------------------:|:-----------------:|:----------------------:|:----------------------:| | IFEval (Average prompt/inst) | **56.7** | 53.5 | 47.4 | 23.1 | | MT-Bench | 6.13 | 5.48 | **6.52** | 4.33 | | OpenRewrite-Eval (micro_avg RougeL) | 44.9 | 39.2 | **46.9** | NaN | | HellaSwag | **66.1** | 56.1 | 60.9 | 55.5 | | ARC (Average) | **51.7** | 41.6 | 46.2 | 43.7 | | PIQA | **74.4** | 72.3 | 73.2 | 71.6 | | MMLU-Pro (MCF) | 19.3 | 12.7 | **24.2** | 11.7 | | BBH (3-shot) | 32.2 | 27.6 | **35.3** | 25.7 | | GSM8K (5-shot) | **48.2** | 26.8 | 42.8 | 4.62 | ## Examples Below are some system and instruct prompts that work well for special tasks ### Text rewriting ```python system_prompt_rewrite = "You are an AI writing assistant. Your task is to rewrite the user's email to make it more professional and approachable while maintaining its main points and key message. Do not return any text other than the rewritten message." user_prompt_rewrite = "Rewrite the message below to make it more friendly and approachable while maintaining its main points and key message. Do not add any new information or return any text other than the rewritten message\nThe message:" messages = [{"role": "system", "content": system_prompt_rewrite}, {"role": "user", "content":f"{user_prompt_rewrite} The CI is failing after your last commit!"}] input_text=tokenizer.apply_chat_template(messages, tokenize=False) inputs = tokenizer.encode(input_text, return_tensors="pt").to(device) outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True) print(tokenizer.decode(outputs[0])) ``` ``` Hey there! I noticed that the CI isn't passing after your latest commit. Could you take a look and let me know what's going on? Thanks so much for your help! ``` ### Summarization ```python system_prompt_summarize = "Provide a concise, objective summary of the input text in up to three sentences, focusing on key actions and intentions without using second or third person pronouns." 
messages = [{"role": "system", "content": system_prompt_summarize}, {"role": "user", "content": INSERT_LONG_EMAIL}] input_text=tokenizer.apply_chat_template(messages, tokenize=False) inputs = tokenizer.encode(input_text, return_tensors="pt").to(device) outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True) print(tokenizer.decode(outputs[0])) ``` ### Function calling SmolLM2-1.7B-Instruct can handle function calling, it scores 27% on the [BFCL Leaderboard](https://gorilla.cs.berkeley.edu/blogs/8_berkeley_function_calling_leaderboard.html). Here's how you can leverage it: ```python import json import re from typing import Optional from jinja2 import Template import torch from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.utils import get_json_schema system_prompt = Template("""You are an expert in composing functions. You are given a question and a set of possible functions. Based on the question, you will need to make one or more function/tool calls to achieve the purpose. If none of the functions can be used, point it out and refuse to answer. If the given question lacks the parameters required by the function, also point it out. You have access to the following tools: <tools>{{ tools }}</tools> The output MUST strictly adhere to the following format, and NO other text MUST be included. The example format is as follows. Please make sure the parameter type is correct. If no function call is needed, please make the tool calls an empty list '[]'. <tool_call>[ {"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}}, ... (more tool calls as required) ]</tool_call>""") def prepare_messages( query: str, tools: Optional[dict[str, any]] = None, history: Optional[list[dict[str, str]]] = None ) -> list[dict[str, str]]: """Prepare the system and user messages for the given query and tools. Args: query: The query to be answered. tools: The tools available to the user. Defaults to None, in which case if a list without content will be passed to the model. history: Exchange of messages, including the system_prompt from the first query. Defaults to None, the first message in a conversation. """ if tools is None: tools = [] if history: messages = history.copy() messages.append({"role": "user", "content": query}) else: messages = [ {"role": "system", "content": system_prompt.render(tools=json.dumps(tools))}, {"role": "user", "content": query} ] return messages def parse_response(text: str) -> str | dict[str, any]: """Parses a response from the model, returning either the parsed list with the tool calls parsed, or the model thought or response if couldn't generate one. Args: text: Response from the model. """ pattern = r"<tool_call>(.*?)</tool_call>" matches = re.findall(pattern, text, re.DOTALL) if matches: return json.loads(matches[0]) return text model_name_smollm = "HuggingFaceTB/SmolLM2-1.7B-Instruct" model = AutoModelForCausalLM.from_pretrained(model_name_smollm, device_map="auto", torch_dtype="auto", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained(model_name_smollm) from datetime import datetime import random def get_current_time() -> str: """Returns the current time in 24-hour format. Returns: str: Current time in HH:MM:SS format. """ return datetime.now().strftime("%H:%M:%S") def get_random_number_between(min: int, max: int) -> int: """ Gets a random number between min and max. Args: min: The minimum number. max: The maximum number. Returns: A random number between min and max. 
""" return random.randint(min, max) tools = [get_json_schema(get_random_number_between), get_json_schema(get_current_time)] toolbox = {"get_random_number_between": get_random_number_between, "get_current_time": get_current_time} query = "Give me a number between 1 and 300" messages = prepare_messages(query, tools=tools) inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device) outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id) result = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True) tool_calls = parse_response(result) # [{'name': 'get_random_number_between', 'arguments': {'min': 1, 'max': 300}} # Get tool responses tool_responses = [toolbox.get(tc["name"])(*tc["arguments"].values()) for tc in tool_calls] # [63] # For the second turn, rebuild the history of messages: history = messages.copy() # Add the "parsed response" history.append({"role": "assistant", "content": result}) query = "Can you give me the hour?" history.append({"role": "user", "content": query}) inputs = tokenizer.apply_chat_template(history, add_generation_prompt=True, return_tensors="pt").to(model.device) outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id) result = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True) tool_calls = parse_response(result) tool_responses = [toolbox.get(tc["name"])(*tc["arguments"].values()) for tc in tool_calls] # ['07:57:25'] ``` More details such as parallel function calls and tools not available can be found [here](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct/blob/main/instructions_function_calling.md) ## Limitations SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content. ## Training ### Model - **Architecture:** Transformer decoder - **Pretraining tokens:** 11T - **Precision:** bfloat16 ### Hardware - **GPUs:** 256 H100 ### Software - **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main) - **Alignment Handbook** [alignment-handbook](https://github.com/huggingface/alignment-handbook/) ## License [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) ## Citation ```bash @misc{allal2025smollm2smolgoesbig, title={SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model}, author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Guilherme Penedo and Lewis Tunstall and Andrés Marafioti and Hynek Kydlíček and Agustín Piqueres Lajarín and Vaibhav Srivastav and Joshua Lochner and Caleb Fahlgren and Xuan-Son Nguyen and Clémentine Fourrier and Ben Burtenshaw and Hugo Larcher and Haojun Zhao and Cyril Zakka and Mathieu Morlon and Colin Raffel and Leandro von Werra and Thomas Wolf}, year={2025}, eprint={2502.02737}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2502.02737}, } ```
Flickinshots/dqn-SpaceInvadersNoFrameskip-v4
Flickinshots
2025-06-18T17:59:33Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-18T17:58:58Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 581.00 +/- 184.88 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Flickinshots -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Flickinshots -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Flickinshots ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
morturr/Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb3-seed7-2025-06-18
morturr
2025-06-18T17:59:11Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-18T17:58:49Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb3-seed7-2025-06-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb3-seed7-2025-06-18 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 7 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
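### Example training sketch

The training script itself is not included in this card; purely as an illustrative sketch, the hyperparameters above map onto TRL/PEFT roughly as follows. The dataset file, text column, LoRA rank, and output directory are placeholders, not details from this run.

```python
# Hypothetical reconstruction of the listed hyperparameters with TRL + PEFT.
# Dataset path, text column, LoRA settings, and output_dir are assumptions.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("json", data_files="train.json", split="train")  # placeholder

cfg = SFTConfig(
    output_dir="llama2-7b-loo-amazon",  # placeholder
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,      # 8 * 4 = total batch size 32
    lr_scheduler_type="linear",
    num_train_epochs=2,
    seed=7,
    dataset_text_field="text",          # assumes a "text" column
)

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",
    args=cfg,
    train_dataset=train_dataset,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # LoRA rank/alpha not reported
)
trainer.train()
```

The default optimizer of `SFTConfig` is AdamW with betas=(0.9, 0.999) and epsilon=1e-08, matching the settings listed above.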
Jonny001/Vortex_Lab
Jonny001
2025-06-18T17:56:13Z
0
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:diffusion-lora", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-06-18T17:46:51Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/1.jpg - text: '-' output: url: images/2.jpg - text: '-' output: url: images/3.jpg - text: '-' output: url: images/4.jpg - text: '-' output: url: images/5.jpg - text: '-' output: url: images/6.jpg - text: '-' base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: creativeml-openrail-m language: - en --- # ⚠ This model has the capability to generate NSFW images. Use responsibly. # Sample Images <Gallery /> --- ## Vortex Lab FLUX ## Model Name: Vortex Lab ## Base Model: Flux.1 D ## Type: Checkpoint Trained ## Version: v1.0 The Vortex Lab model is a Checkpoint Trained generative model built upon the Flux.1 D base. Designed for high-quality image synthesis, this model excels in producing detailed and expressive visuals. It performs especially well in generating stylized and imaginative content, making it a versatile choice for artists and creators working with AI-driven imagery. ## Download model Weights for this model are available in Safetensors format. [Download](/Jonny001/Vortex_Lab/tree/main) them in the Files & versions tab. --- ## Credits Click [Here](https://civitai.com/models/1583850/vortex-lab-flux)
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb3-seed28-2025-06-18
morturr
2025-06-18T17:55:59Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-18T17:55:43Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb3-seed28-2025-06-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb3-seed28-2025-06-18 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 28 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
morturr/Mistral-7B-v0.1-amazon-seed-18-2025-06-18
morturr
2025-06-18T17:55:16Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2025-06-18T17:55:07Z
--- library_name: peft license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - trl - sft - generated_from_trainer model-index: - name: Mistral-7B-v0.1-amazon-seed-18-2025-06-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-v0.1-amazon-seed-18-2025-06-18 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 18 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
igorktech/skommarkhos-lucie7binstructv1-1-sft-arpo-a13
igorktech
2025-06-18T17:51:46Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "cpo", "arxiv:2401.08417", "base_model:OpenLLM-France/Lucie-7B-Instruct-v1.1", "base_model:finetune:OpenLLM-France/Lucie-7B-Instruct-v1.1", "endpoints_compatible", "region:us" ]
null
2025-06-18T17:03:27Z
--- base_model: OpenLLM-France/Lucie-7B-Instruct-v1.1 library_name: transformers model_name: skommarkhos-lucie7binstructv1-1-sft-arpo-a13 tags: - generated_from_trainer - trl - cpo licence: license --- # Model Card for skommarkhos-lucie7binstructv1-1-sft-arpo-a13 This model is a fine-tuned version of [OpenLLM-France/Lucie-7B-Instruct-v1.1](https://huggingface.co/OpenLLM-France/Lucie-7B-Instruct-v1.1). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="igorktech/skommarkhos-lucie7binstructv1-1-sft-arpo-a13", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/igorktech01/joker-pun-translation/runs/o6l6mntt) This model was trained with CPO, a method introduced in [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417). ### Framework versions - TRL: 0.18.2 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite CPO as: ```bibtex @inproceedings{xu2024contrastive, title = {{Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}}, author = {Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim}, year = 2024, booktitle = {Forty-first International Conference on Machine Learning, {ICML} 2024, Vienna, Austria, July 21-27, 2024}, publisher = {OpenReview.net}, url = {https://openreview.net/forum?id=51iwkioZpn} } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
cesarali/StudyTransfomerPK_cluster
cesarali
2025-06-18T17:50:01Z
0
0
generative-pk
[ "generative-pk", "pytorch", "node_pk", "predictive", "en", "dataset:simulated", "license:apache-2.0", "region:us" ]
null
2025-06-18T17:10:55Z
---
language:
- en
license: apache-2.0
library_name: generative-pk
datasets:
- simulated
metrics:
- rmse
- npde
tags:
- predictive
---

# Study NODE PK Prediction

## Overview
An amortized-context neural ODE for pharmacokinetic prediction that aggregates individual behavior per substance.

**Model details:**
- **Authors:** César Ojeda (@cesarali)
- **License:** Apache 2.0

## Intended use
Sampling drug concentration behavior.
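## Illustrative sketch

The card stops short of a usage or modelling example, and the `generative-pk` API is not documented here. Purely for orientation, below is a minimal sketch of the underlying idea — a neural ODE fitted to concentration–time data — written against `torchdiffeq` as an assumed stand-in, not this package's actual interface.

```python
# Illustrative only: a neural ODE over drug concentration, not generative-pk's API.
import torch
from torch import nn
from torchdiffeq import odeint  # assumed dependency for this sketch

class ConcentrationODE(nn.Module):
    """dC/dt = f_theta(C): a small MLP in place of fixed compartmental rate laws."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, t, c):
        return self.f(c)

c0 = torch.tensor([[4.0]])                # concentration right after dosing (e.g. mg/L)
t = torch.linspace(0.0, 24.0, 25)         # observation grid in hours
pred = odeint(ConcentrationODE(), c0, t)  # predicted concentration-time curve
print(pred.shape)                         # (25, 1, 1)
```

An amortized-context variant would presumably condition `f` on a per-individual or per-substance embedding, which is what the overview's "aggregates individual behavior per substance" hints at.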
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb3-seed28-2025-06-18
morturr
2025-06-18T17:46:42Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-18T17:46:26Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb3-seed28-2025-06-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb3-seed28-2025-06-18 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 28 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
s0mecode/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF
s0mecode
2025-06-18T17:45:31Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-18T17:45:13Z
---
license: mit
library_name: transformers
base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
tags:
- llama-cpp
- gguf-my-repo
---

# s0mecode/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-0528-Qwen3-8B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo s0mecode/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo s0mecode/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo s0mecode/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo s0mecode/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -c 2048
```
str20tbl/orpheus3b-cy-en
str20tbl
2025-06-18T17:44:17Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/orpheus-3b-0.1-ft", "base_model:finetune:unsloth/orpheus-3b-0.1-ft", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-18T17:43:53Z
--- base_model: unsloth/orpheus-3b-0.1-ft tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** str20tbl - **License:** apache-2.0 - **Finetuned from model :** unsloth/orpheus-3b-0.1-ft This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
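## Example usage (sketch)

No usage snippet is bundled with this card. As a rough, unverified sketch, the checkpoint should load like any Llama-family causal LM; note, however, that Orpheus-style models generate audio codec tokens that need a separate decoder (e.g. SNAC), which is not shown here. The prompt format below is an assumption.

```python
# Hedged sketch: loads the repo as a standard causal LM and samples token IDs.
# Orpheus-style checkpoints emit audio tokens; decoding them to a waveform
# (e.g. with SNAC) is model-specific and intentionally omitted.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "str20tbl/orpheus3b-cy-en"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Bore da! Sut wyt ti?", return_tensors="pt").to(model.device)  # assumed prompt format
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(output_ids.shape)  # token IDs only; a codec decoder is required for audio
```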
dgambettaphd/M_llm2_run2_gen7_WXS_doc1000_synt120_lr1e-04_acm_SYNLAST
dgambettaphd
2025-06-18T17:38:15Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-18T17:38:01Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sgonzalezygil/sd-finetuning-dreambooth-v12-2000
sgonzalezygil
2025-06-18T17:37:47Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-18T17:35:10Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
morturr/Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb1-seed7-2025-06-18
morturr
2025-06-18T17:36:59Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-18T17:36:50Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb1-seed7-2025-06-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb1-seed7-2025-06-18 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 7 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
codewithRiz/cindyprado
codewithRiz
2025-06-18T17:35:44Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-18T17:33:43Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym widget: - output: url: sample/cindyprado_003000_00_20250618222828.png text: cindyprado a woman in a pink bikini and straw hat sitting on a beach chair - output: url: sample/cindyprado_003000_01_20250618222907.png text: cindyprado a woman wearing a white bikini top and a straw hat - output: url: sample/cindyprado_003000_02_20250618222946.png text: cindyprado a woman wearing a black jumpsuit and sunglasses - output: url: sample/cindyprado_003000_03_20250618223025.png text: cindyprado a woman in a gold dress standing and holding a gold purse. - output: url: sample/cindyprado_003000_04_20250618223105.png text: cindyprado a woman in a blue bikini standing on the beach - output: url: sample/cindyprado_003000_05_20250618223144.png text: cindyprado a woman in an orange dress standing in front of a building, base_model: black-forest-labs/FLUX.1-dev instance_prompt: cindyprado license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # cindyprado A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `cindyprado` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
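## Use it with the 🧨 diffusers library (illustrative sketch)

The card lists ComfyUI, AUTOMATIC1111, and friends but no diffusers snippet; here is a minimal, unverified sketch. The `weight_name` below is an assumption based on Fluxgym's usual `{name}.safetensors` convention, and the prompt is taken from the sample gallery above.

```python
# Minimal sketch for loading this Fluxgym LoRA with diffusers.
# weight_name is an assumption, not confirmed by the card.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("codewithRiz/cindyprado", weight_name="cindyprado.safetensors")

# "cindyprado" is the trigger word from the card.
image = pipe(
    "cindyprado a woman in a blue bikini standing on the beach",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("cindyprado.png")
```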
shaddie/rocketpill_ts_informer_model
shaddie
2025-06-18T17:27:10Z
4
0
transformers
[ "transformers", "safetensors", "informer", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T21:07:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MaxTGH/SDXL1e-3
MaxTGH
2025-06-18T17:25:41Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-18T16:35:13Z
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a drone image of a humpback whale
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---

<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - MaxTGH/SDXL1e-3

<Gallery />

## Model description

These are MaxTGH/SDXL1e-3 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: None.

## Trigger words

You should use `a drone image of a humpback whale` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/MaxTGH/SDXL1e-3/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
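#### Example (unofficial sketch)

Until the TODO snippet above is filled in, the following is a plausible minimal sketch using stock diffusers SDXL LoRA loading; the repo id is this model, and everything else follows standard diffusers usage rather than an author-provided recipe.

```python
# Hedged example: standard diffusers loading for an SDXL DreamBooth LoRA.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("MaxTGH/SDXL1e-3")  # picks up the LoRA weights in this repo

# The instance prompt from the card doubles as the trigger phrase.
image = pipe("a drone image of a humpback whale", num_inference_steps=30).images[0]
image.save("whale.png")
```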
JGamonalHML/FondoEsperanzav6.0
JGamonalHML
2025-06-18T17:21:57Z
0
0
bertopic
[ "bertopic", "text-classification", "region:us" ]
text-classification
2025-06-18T17:21:41Z
--- tags: - bertopic library_name: bertopic pipeline_tag: text-classification --- # FondoEsperanzav6.0 This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. ## Usage To use this model, please install BERTopic: ``` pip install -U bertopic ``` You can use the model as follows: ```python from bertopic import BERTopic topic_model = BERTopic.load("JGamonalHML/FondoEsperanzav6.0") topic_model.get_topic_info() ``` ## Topic overview * Number of topics: 60 * Number of training documents: 12530 <details> <summary>Click here for an overview of all topics.</summary> | Topic ID | Topic Keywords | Topic Frequency | Label | |----------|----------------|-----------------|-------| | -1 | racismo - alegación - discriminación - - | 63 | -1_racismo_alegación_discriminación_ | | 0 | emprendedores - pequeños - apoyo - asistencia - apoya | 1 | 0_emprendedores_pequeños_apoyo_asistencia | | 1 | empresas - pequeñas - empresarial - crecimiento - empresariales | 816 | 1_empresas_pequeñas_empresarial_crecimiento | | 2 | emprendimiento - recursos - oportunidades - apoyo - networking | 742 | 2_emprendimiento_recursos_oportunidades_apoyo | | 3 | pago - flexibles - opciones - mensual - semanal | 596 | 3_pago_flexibles_opciones_mensual | | 4 | capital - financiamiento - financiera - financiación - inyección | 532 | 4_capital_financiamiento_financiera_financiación | | 5 | texto - aborda - discute - herramienta - utilidad | 371 | 5_texto_aborda_discute_herramienta | | 6 | reuniones - miembros - presenciales - debido - tiempo | 465 | 6_reuniones_miembros_presenciales_debido | | 7 | esperanza - fondo - fundación - usuario - apoyo | 426 | 7_esperanza_fondo_fundación_usuario | | 8 | crédito - acceso - bancario - facilidad - accesible | 578 | 8_crédito_acceso_bancario_facilidad | | 9 | positiva - experiencia - descrita - usuario - satisfactoria | 396 | 9_positiva_experiencia_descrita_usuario | | 10 | bajos - tipos - intereses - interés - préstamos | 276 | 10_bajos_tipos_intereses_interés | | 11 | negocio - iniciar - negocios - inicio - expandir | 256 | 11_negocio_iniciar_negocios_inicio | | 12 | transparencia - discusión - falta - importancia - transparente | 484 | 12_transparencia_discusión_falta_importancia | | 13 | socios - hacia - compromiso - relaciones - falta | 179 | 13_socios_hacia_compromiso_relaciones | | 14 | pagos - préstamo - semanales - mensuales - cantidad | 225 | 14_pagos_préstamo_semanales_mensuales | | 15 | herramienta - útil - beneficiosa - empresarial - crecimiento | 255 | 15_herramienta_útil_beneficiosa_empresarial | | 16 | personal - beneficio - crecimiento - experiencia - desarrollo | 403 | 16_personal_beneficio_crecimiento_experiencia | | 17 | tasas - bajas - interés - tasa - baja | 340 | 17_tasas_bajas_interés_tasa | | 18 | bajo - interés - préstamos - tipo - costo | 458 | 18_bajo_interés_préstamos_tipo | | 19 | institución - sistema - reputación - eficiencia - confiable | 255 | 19_institución_sistema_reputación_eficiencia | | 20 | servicio - producto - hecha - calidad - recomendación | 198 | 20_servicio_producto_hecha_calidad | | 21 | asistencia - proporcionada - significativa - recibida - ayuda | 196 | 21_asistencia_proporcionada_significativa_recibida | | 22 | grupo - confianza - equipo - recepción - positiva | 306 | 22_grupo_confianza_equipo_recepción | | 23 | microemprendedores - microempresas - microempresarios - apoyo - ayuda | 224 | 
23_microemprendedores_microempresas_microempresarios_apoyo | | 24 | razones - cuales - elección - opción - buen | 85 | 24_razones_cuales_elección_opción | | 25 | individuo - independencia - independientes - independiente - trabajadores | 134 | 25_individuo_independencia_independientes_independiente | | 26 | detalles - específicos - especificar - experiencia - positiva | 114 | 26_detalles_específicos_especificar_experiencia | | 27 | recomendación - hecho - calidad - recomendado - realizada | 87 | 27_recomendación_hecho_calidad_recomendado | | 28 | asesor - asesores - parte - cambios - problemas | 111 | 28_asesor_asesores_parte_cambios | | 29 | inversión - motivos - opción - cuales - adecuada | 218 | 29_inversión_motivos_opción_cuales | | 30 | fiabilidad - seguridad - factores - explicación - confiabilidad | 140 | 30_fiabilidad_seguridad_factores_explicación | | 31 | pueden - individuos - bancarios - acceder - préstamos | 125 | 31_pueden_individuos_bancarios_acceder | | 32 | responsabilidad - responsables - individuos - organización - grupo | 249 | 32_responsabilidad_responsables_individuos_organización | | 33 | proyectos - proyecto - crecimiento - oportunidad - desarrollo | 225 | 33_proyectos_proyecto_crecimiento_oportunidad | | 34 | información - claridad - clara - comunicación - falta | 160 | 34_información_claridad_clara_comunicación | | 35 | proporcionado - apoyo - gratitud - expresión - recibido | 127 | 35_proporcionado_apoyo_gratitud_expresión | | 36 | startups - startup - financiar - financiamiento - recursos | 151 | 36_startups_startup_financiar_financiamiento | | 37 | pymes - medianas - pequeñas - empresas - apoyo | 104 | 37_pymes_medianas_pequeñas_empresas | | 38 | proceso - documentación - mínima - documentos - firmas | 113 | 38_proceso_documentación_mínima_documentos | | 39 | banco - comunitario - calidad - satisfacción - organizado | 83 | 39_banco_comunitario_calidad_satisfacción | | 40 | económica - económico - asistencia - apoyo - ayuda | 161 | 40_económica_económico_asistencia_apoyo | | 41 | 15 - días - cada - fechas - frecuentes | 108 | 41_15_días_cada_fechas | | 42 | banco - ofreciendo - apoya - adecuado - oportunidades | 45 | 42_banco_ofreciendo_apoya_adecuado | | 43 | seguridad - seguro - características - cobertura - contrato | 129 | 43_seguridad_seguro_características_cobertura | | 44 | comerciales - operaciones - comercial - emprendimientos - utilizado | 97 | 44_comerciales_operaciones_comercial_emprendimientos | | 45 | fácil - obtención - acceso - crédito - fondos | 77 | 45_fácil_obtención_acceso_crédito | | 46 | empatía - falta - problemas - hacia - parte | 71 | 46_empatía_falta_problemas_hacia | | 47 | hope - fund - sido - asesores - personal | 92 | 47_hope_fund_sido_asesores | | 48 | flexibilidad - accesibilidad - limitada - rapidez - clave | 32 | 48_flexibilidad_accesibilidad_limitada_rapidez | | 49 | mínimo - interés - mínima - implementación - niveles | 76 | 49_mínimo_interés_mínima_implementación | | 50 | liderazgo - líder - grupo - miembros - falta | 52 | 50_liderazgo_líder_grupo_miembros | | 51 | papeleo - mínimo - menos - requerido - rápido | 35 | 51_papeleo_mínimo_menos_requerido | | 52 | datos - ia - herramientas - análisis - clientes | 34 | 52_datos_ia_herramientas_análisis | | 53 | sirve - día - herramienta - contrario - individualmente | 38 | 53_sirve_día_herramienta_contrario | | 54 | the - discusses - for - of - and | 46 | 54_the_discusses_for_of | | 55 | dicom - interesados - seguros - materias - primas | 18 | 55_dicom_interesados_seguros_materias | 
| 56 | alternativo - financiamiento - opción - alternativa - viable | 32 | 56_alternativo_financiamiento_opción_alternativa | | 57 | artículo - situación - conveniencia - cierto - práctico | 84 | 57_artículo_situación_conveniencia_cierto | | 58 | seriedad - compromiso - énfasis - profesionalismo - responsabilidad | 36 | 58_seriedad_compromiso_énfasis_profesionalismo | </details> ## Training hyperparameters * calculate_probabilities: False * language: None * low_memory: False * min_topic_size: 10 * n_gram_range: (1, 1) * nr_topics: 60 * seed_topic_list: None * top_n_words: 10 * verbose: False * zeroshot_min_similarity: 0.7 * zeroshot_topic_list: None ## Framework versions * Numpy: 2.2.5 * HDBSCAN: 0.8.40 * UMAP: 0.5.7 * Pandas: 2.2.3 * Scikit-Learn: 1.6.1 * Sentence-transformers: 4.1.0 * Transformers: 4.51.3 * Numba: 0.61.2 * Plotly: 6.0.1 * Python: 3.12.1
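Beyond inspecting the topic table, the loaded model can also assign one of the 60 topics to unseen documents via `transform`. A minimal sketch — the example document below is hypothetical:

```python
from bertopic import BERTopic

topic_model = BERTopic.load("JGamonalHML/FondoEsperanzav6.0")

# Hypothetical new document; a -1 in the output marks an outlier
docs = ["El fondo me ayudó a hacer crecer mi pequeño negocio"]
topics, probs = topic_model.transform(docs)
print(topics)
```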
ECE-ILAB/POIROT-ECE-1.0
ECE-ILAB
2025-06-18T17:21:10Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "mergekit", "merge", "conversational", "base_model:AXCXEPT/Qwen3-EZO-8B-beta", "base_model:merge:AXCXEPT/Qwen3-EZO-8B-beta", "base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1", "base_model:merge:Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T17:05:53Z
--- base_model: - AXCXEPT/Qwen3-EZO-8B-beta - Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1 library_name: transformers tags: - mergekit - merge --- # POIROT-ECE-1.0 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method. ### Models Merged The following models were included in the merge: * [AXCXEPT/Qwen3-EZO-8B-beta](https://huggingface.co/AXCXEPT/Qwen3-EZO-8B-beta) * [Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: AXCXEPT/Qwen3-EZO-8B-beta layer_range: [0, 35] - model: Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1 layer_range: [0, 35] merge_method: slerp base_model: AXCXEPT/Qwen3-EZO-8B-beta parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
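For intuition, SLERP walks along the arc between two weight tensors instead of the straight line, which preserves their scale better than plain averaging. Below is a minimal sketch of the interpolation itself — an illustration, not mergekit's actual implementation — assuming two flattened weight tensors and an interpolation factor `t` like the per-filter values in the YAML above:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Angle between the two (normalized) weight vectors
    v0_n = v0 / (v0.norm() + eps)
    v1_n = v1 / (v1.norm() + eps)
    dot = torch.clamp((v0_n * v1_n).sum(), -1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < 1e-4:
        # Nearly parallel vectors: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    sin_theta = torch.sin(theta)
    return (torch.sin((1 - t) * theta) / sin_theta) * v0 + (torch.sin(t * theta) / sin_theta) * v1

merged = slerp(0.5, torch.randn(4096), torch.randn(4096))  # t = 0.5 mixes both parents equally
```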
phospho-app/gc1724-ACT_BBOX-bottle-hwyfx
phospho-app
2025-06-18T17:19:49Z
0
0
null
[ "phosphobot", "act", "region:us" ]
null
2025-06-18T17:18:47Z
--- tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` [Errno 20] Not a directory: '/__modal/volumes/vo-jpHx3K78b6s9tZZNuqKoXe/datasets/gc1724/bottle_bboxes/videos/chunk-000/.DS_Store' ``` ## Training parameters: - **Dataset**: [gc1724/bottle](https://huggingface.co/datasets/gc1724/bottle) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 100 - **Training steps**: 10000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
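The traceback shows a macOS `.DS_Store` file inside `videos/chunk-000` being treated as a chunk directory. One possible cleanup before relaunching training — a sketch assuming local access to the dataset folder (the path below is hypothetical):

```python
import os

root_dir = "datasets/gc1724/bottle_bboxes"  # hypothetical local path to the dataset
for root, _dirs, files in os.walk(root_dir):
    for name in files:
        if name == ".DS_Store":
            os.remove(os.path.join(root, name))  # drop macOS Finder artifacts
```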
ITHwangg/lebotica-pickplace-v3-step1k
ITHwangg
2025-06-18T17:18:50Z
0
0
null
[ "safetensors", "dataset:ITHwangg/svla_koch_pickplace_v3", "license:mit", "region:us" ]
null
2025-06-15T09:04:41Z
--- datasets: - ITHwangg/svla_koch_pickplace_v3 license: mit --- # lebotica-pickplace-v3-step1k - Dataset: [ITHwangg/svla_koch_pickplace_v3](https://huggingface.co/datasets/ITHwangg/svla_koch_pickplace_v3) - Model: [ITHwangg/lebotica-pickplace-15k](https://huggingface.co/ITHwangg/lebotica-pickplace-15k)
ITHwangg/lebotica-pickplace-v2-step5k
ITHwangg
2025-06-18T17:14:32Z
0
0
null
[ "safetensors", "dataset:ITHwangg/svla_koch_pickplace_v2", "license:mit", "region:us" ]
null
2025-06-15T05:26:25Z
--- datasets: - ITHwangg/svla_koch_pickplace_v2 license: mit --- # lebotica-pickplace-v2-step5k - Dataset: [ITHwangg/svla_koch_pickplace_v2](https://huggingface.co/datasets/ITHwangg/svla_koch_pickplace_v2) - Model: [ITHwangg/lebotica-pickplace-15k](https://huggingface.co/ITHwangg/lebotica-pickplace-15k)
GraybeardTheIrate/Harbinger-Cogwheel
GraybeardTheIrate
2025-06-18T17:11:00Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:LatitudeGames/Harbinger-24B", "base_model:merge:LatitudeGames/Harbinger-24B", "base_model:OddTheGreat/Cogwheel_24b_V.2", "base_model:merge:OddTheGreat/Cogwheel_24b_V.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T14:38:42Z
--- base_model: - OddTheGreat/Cogwheel_24b_V.2 - LatitudeGames/Harbinger-24B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method. ### Models Merged The following models were included in the merge: * [OddTheGreat/Cogwheel_24b_V.2](https://huggingface.co/OddTheGreat/Cogwheel_24b_V.2) * [LatitudeGames/Harbinger-24B](https://huggingface.co/LatitudeGames/Harbinger-24B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: LatitudeGames/Harbinger-24B - model: OddTheGreat/Cogwheel_24b_V.2 merge_method: slerp base_model: LatitudeGames/Harbinger-24B dtype: bfloat16 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 ```
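A minimal inference sketch, assuming the merge is published under this repo id and using the bfloat16 dtype from the configuration above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GraybeardTheIrate/Harbinger-Cogwheel"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Write the opening line of a dark fantasy tale.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```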
ITHwangg/lebotica-pickplace-v2-step1k
ITHwangg
2025-06-18T17:10:59Z
0
0
null
[ "safetensors", "dataset:ITHwangg/svla_koch_pickplace_v2", "license:mit", "region:us" ]
null
2025-06-15T04:06:10Z
--- datasets: - ITHwangg/svla_koch_pickplace_v2 license: mit --- # lebotica-pickplace-v2-step1k - Dataset: [ITHwangg/svla_koch_pickplace_v2](https://huggingface.co/datasets/ITHwangg/svla_koch_pickplace_v2) - Model: [ITHwangg/lebotica-pickplace-15k](https://huggingface.co/ITHwangg/lebotica-pickplace-15k)
ITHwangg/lebotica-pickplace-stacking-step15k
ITHwangg
2025-06-18T17:07:54Z
0
0
null
[ "safetensors", "dataset:ITHwangg/svla_koch_pickplace_and_stacking", "license:mit", "region:us" ]
null
2025-06-15T01:54:14Z
--- datasets: - ITHwangg/svla_koch_pickplace_and_stacking license: mit --- # lebotica-pickplace-stacking-step15k - Dataset: [ITHwangg/svla_koch_pickplace_and_stacking](https://huggingface.co/datasets/ITHwangg/svla_koch_pickplace_and_stacking) - Model: [lerobot/smolvla_base](https://huggingface.co/lerobot/smolvla_base)
ITHwangg/lebotica-pickplace-stacking-step10k
ITHwangg
2025-06-18T17:07:21Z
0
0
null
[ "safetensors", "dataset:ITHwangg/svla_koch_pickplace_and_stacking", "license:mit", "region:us" ]
null
2025-06-15T00:27:13Z
--- datasets: - ITHwangg/svla_koch_pickplace_and_stacking license: mit --- # lebotica-pickplace-stacking-step10k - Dataset: [ITHwangg/svla_koch_pickplace_and_stacking](https://huggingface.co/datasets/ITHwangg/svla_koch_pickplace_and_stacking) - Model: [lerobot/smolvla_base](https://huggingface.co/lerobot/smolvla_base)
vcabeli/Qwen3-8B-Open-R1-GRPO-signature-expression
vcabeli
2025-06-18T17:05:52Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:Qwen/Qwen3-8B", "base_model:finetune:Qwen/Qwen3-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-17T16:29:47Z
--- base_model: Qwen/Qwen3-8B library_name: transformers model_name: Qwen3-8B-Open-R1-GRPO-signature-expression tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Qwen3-8B-Open-R1-GRPO-signature-expression This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vcabeli/Qwen3-8B-Open-R1-GRPO-signature-expression", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/vincent-cabeli-owkin/huggingface/runs/cughyzye) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.0 - Transformers: 4.52.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
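For intuition, the heart of GRPO is a group-relative advantage: each completion sampled for a prompt is scored against the other completions in the same group, so no learned value function is needed. A minimal sketch with hypothetical rewards:

```python
import numpy as np

# Hypothetical rewards for four completions sampled for the same prompt
rewards = np.array([0.2, 0.9, 0.5, 0.1])
advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
print(advantages)  # positive for above-average completions, negative otherwise
```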
ITHwangg/lebotica-pickplace-stacking-step1k
ITHwangg
2025-06-18T17:05:23Z
0
0
null
[ "safetensors", "dataset:ITHwangg/svla_koch_pickplace_and_stacking", "license:mit", "region:us" ]
null
2025-06-15T00:24:59Z
--- datasets: - ITHwangg/svla_koch_pickplace_and_stacking license: mit --- # lebotica-pickplace-stacking-step1k - Dataset: [ITHwangg/svla_koch_pickplace_and_stacking](https://huggingface.co/datasets/ITHwangg/svla_koch_pickplace_and_stacking) - Model: [lerobot/smolvla_base](https://huggingface.co/lerobot/smolvla_base)
sgonzalezygil/sd-finetuning-dreambooth-v12
sgonzalezygil
2025-06-18T17:05:14Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-18T17:03:19Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ITHwangg/lebotica-pickplace-step15k
ITHwangg
2025-06-18T17:03:46Z
0
0
null
[ "safetensors", "dataset:ITHwangg/svla_koch_pickplace", "license:mit", "region:us" ]
null
2025-06-15T02:02:59Z
--- datasets: - ITHwangg/svla_koch_pickplace license: mit --- # lebotica-pickplace-step15k - Dataset: [ITHwangg/svla_koch_pickplace](https://huggingface.co/datasets/ITHwangg/svla_koch_pickplace) - Model: [lerobot/smolvla_base](https://huggingface.co/lerobot/smolvla_base)
michaelbenayoun/granite-tiny-4kv-heads-4layers-random
michaelbenayoun
2025-06-18T16:59:22Z
0
0
transformers
[ "transformers", "safetensors", "granite", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T16:59:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
The-Welcomer/high-accuracy
The-Welcomer
2025-06-18T16:51:52Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T16:42:19Z
--- base_model: unsloth/qwen3-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** The-Welcomer - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen3-8b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
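A minimal inference sketch, assuming the uploaded weights load directly through `transformers` (the prompt is hypothetical):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="The-Welcomer/high-accuracy", device_map="auto")
messages = [{"role": "user", "content": "Explain overfitting in two sentences."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```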
harriskr14/emotion-classification
harriskr14
2025-06-18T16:47:45Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-18T09:09:41Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: emotion-classification results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.51875 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion-classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3560 - Accuracy: 0.5188 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 5 | 1.6699 | 0.4313 | | 1.5821 | 2.0 | 10 | 1.6118 | 0.4562 | | 1.5821 | 3.0 | 15 | 1.5550 | 0.475 | | 1.445 | 4.0 | 20 | 1.5128 | 0.5062 | | 1.445 | 5.0 | 25 | 1.4508 | 0.5375 | | 1.3202 | 6.0 | 30 | 1.4364 | 0.5 | | 1.3202 | 7.0 | 35 | 1.3776 | 0.575 | | 1.2242 | 8.0 | 40 | 1.3966 | 0.5 | | 1.2242 | 9.0 | 45 | 1.3724 | 0.525 | | 1.1589 | 10.0 | 50 | 1.3483 | 0.525 | | 1.1589 | 11.0 | 55 | 1.3186 | 0.5687 | | 1.0962 | 12.0 | 60 | 1.3295 | 0.5375 | | 1.0962 | 13.0 | 65 | 1.3058 | 0.5875 | | 1.0542 | 14.0 | 70 | 1.3296 | 0.5375 | | 1.0542 | 15.0 | 75 | 1.3185 | 0.5813 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
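A minimal inference sketch for the fine-tuned classifier; the image path is hypothetical and the labels come from the `imagefolder` training data:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="harriskr14/emotion-classification")
predictions = classifier("face.jpg")  # hypothetical local image
print(predictions)  # list of {label, score} dictionaries
```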
morturr/Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb3-seed42-2025-06-18
morturr
2025-06-18T16:43:33Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-18T16:43:18Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb3-seed42-2025-06-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb3-seed42-2025-06-18 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
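Since this repository stores a PEFT adapter rather than full model weights, it is loaded on top of the base model — a minimal sketch, assuming access to the gated Llama 2 base:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "morturr/Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb3-seed42-2025-06-18")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```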
hyperonsol/kaka-memes
hyperonsol
2025-06-18T16:41:22Z
7
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-05T17:03:42Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: KAKA --- # Kaka Memes <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `KAKA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "KAKA", "lora_weights": "https://huggingface.co/hyperonsol/kaka-memes/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('hyperonsol/kaka-memes', weight_name='lora.safetensors') image = pipeline('KAKA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 5000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/hyperonsol/kaka-memes/discussions) to add images that show off what you’ve made with this LoRA.
shopitalic/waffle-towels-set
shopitalic
2025-06-18T16:39:23Z
0
0
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-18T16:39:18Z
--- tags: - flux - text-to-image - lora - diffusers - fal base_model: black-forest-labs/FLUX.1-dev instance_prompt: license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # waffle towels set <Gallery /> ## Model description ## Trigger words You should use `` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/shopitalic/waffle-towels-set/tree/main) them in the Files & versions tab. ## Training at fal.ai Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
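A minimal diffusers sketch, assuming the adapter file is named `lora.safetensors` in this repo (check the Files tab); no trigger word is defined, so the prompt is purely hypothetical:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("shopitalic/waffle-towels-set", weight_name="lora.safetensors")
image = pipeline("a folded set of waffle-weave towels on a marble counter").images[0]
```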
morturr/Mistral-7B-v0.1-headlines-seed-18-2025-06-18
morturr
2025-06-18T16:37:18Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2025-06-18T16:34:51Z
--- library_name: peft license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - trl - sft - generated_from_trainer model-index: - name: Mistral-7B-v0.1-headlines-seed-18-2025-06-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-v0.1-headlines-seed-18-2025-06-18 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 18 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
spk1/tarmac_llama_instruct2
spk1
2025-06-18T16:34:40Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-17T21:17:53Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** spk1 - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
endlesstools/etMVadapter-i-endpoint
endlesstools
2025-06-18T16:28:53Z
0
0
null
[ "license:mit", "endpoints_compatible", "region:us" ]
null
2025-06-18T15:27:07Z
--- title: MV Adapter Img2Texture emoji: 🔮 colorFrom: purple colorTo: yellow sdk: gradio sdk_version: 5.23.1 app_file: app.py pinned: false license: mit short_description: Generate 3D texture from image --- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
vishakr01/comp4_03
vishakr01
2025-06-18T16:27:57Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T16:24:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MaxTGH/SDXLBase1e-4TS200
MaxTGH
2025-06-18T16:21:03Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-18T16:21:01Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: a drone image of a humpback whale output: url: images/image_3.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a drone image of a humpback whale license: openrail++ --- # SDXL LoRA DreamBooth <Gallery /> ## Model description These are MaxTGH/Model LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Trigger words You should use `a drone image of a humpback whale` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/MaxTGH/SDXLBase1e-4TS200/tree/main) them in the Files & versions tab.
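A minimal inference sketch using the trigger prompt above, assuming the LoRA weights load directly onto the SDXL base pipeline:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("MaxTGH/SDXLBase1e-4TS200")
image = pipe("a drone image of a humpback whale").images[0]
```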
huihui-ai/Huihui-Qwen3-8B-abliterated-v2
huihui-ai
2025-06-18T16:15:07Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "chat", "abliterated", "uncensored", "conversational", "base_model:Qwen/Qwen3-8B", "base_model:finetune:Qwen/Qwen3-8B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T15:24:27Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE pipeline_tag: text-generation base_model: - Qwen/Qwen3-8B tags: - chat - abliterated - uncensored --- # huihui-ai/Huihui-Qwen3-8B-abliterated-v2 This is an uncensored version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it). This is a crude, proof-of-concept implementation to remove refusals from an LLM without using TransformerLens. Ablation was performed using a new and faster method, which yields better results. **Important Note** This version is an improvement over the previous one, [huihui-ai/Qwen3-8B-abliterated](https://huggingface.co/huihui-ai/Qwen3-8B-abliterated). The ollama version has also been modified. Layer 0 was changed to eliminate the garbled-output problem. ## ollama You can use [huihui_ai/qwen3-abliterated:8b-v2](https://ollama.com/huihui_ai/qwen3-abliterated:8b-v2) directly. Switch the thinking toggle using /set think and /set nothink ``` ollama run huihui_ai/qwen3-abliterated:8b-v2 ``` ## Usage You can use this model in your applications by loading it with Hugging Face's `transformers` library: ```python from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer import torch import os import signal import random import numpy as np import time from collections import Counter cpu_count = os.cpu_count() print(f"Number of CPU cores in the system: {cpu_count}") half_cpu_count = cpu_count // 2 os.environ["MKL_NUM_THREADS"] = str(half_cpu_count) os.environ["OMP_NUM_THREADS"] = str(half_cpu_count) torch.set_num_threads(half_cpu_count) print(f"PyTorch threads: {torch.get_num_threads()}") print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}") print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}") # Load the model and tokenizer NEW_MODEL_ID = "huihui-ai/Huihui-Qwen3-8B-abliterated-v2" print(f"Load Model {NEW_MODEL_ID} ... ") quant_config_4 = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, llm_int8_enable_fp32_cpu_offload=True, ) model = AutoModelForCausalLM.from_pretrained( NEW_MODEL_ID, device_map="auto", trust_remote_code=True, #quantization_config=quant_config_4, torch_dtype=torch.bfloat16 ) # Load the tokenizer once and make sure a pad token is defined tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True) if tokenizer.pad_token is None: tokenizer.pad_token = tokenizer.eos_token tokenizer.pad_token_id = tokenizer.eos_token_id messages = [] nothink = False same_seed = False skip_prompt=True skip_special_tokens=True do_sample = True def set_random_seed(seed=None): """Set random seed for reproducibility. 
If seed is None, use int(time.time()).""" if seed is None: seed = int(time.time()) # Convert float to int random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) # If using CUDA torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False return seed # Return seed for logging if needed class CustomTextStreamer(TextStreamer): def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True): super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens) self.generated_text = "" self.stop_flag = False self.init_time = time.time() # Record initialization time self.end_time = None # To store end time self.first_token_time = None # To store first token generation time self.token_count = 0 # To track total tokens def on_finalized_text(self, text: str, stream_end: bool = False): if self.first_token_time is None and text.strip(): # Set first token time on first non-empty text self.first_token_time = time.time() self.generated_text += text # Count tokens in the generated text tokens = self.tokenizer.encode(text, add_special_tokens=False) self.token_count += len(tokens) print(text, end="", flush=True) if stream_end: self.end_time = time.time() # Record end time when streaming ends if self.stop_flag: raise StopIteration def stop_generation(self): self.stop_flag = True self.end_time = time.time() # Record end time when generation is stopped def get_metrics(self): """Returns initialization time, first token time, first token latency, end time, total time, total tokens, and tokens per second.""" if self.end_time is None: self.end_time = time.time() # Set end time if not already set total_time = self.end_time - self.init_time # Total time from init to end tokens_per_second = self.token_count / total_time if total_time > 0 else 0 first_token_latency = (self.first_token_time - self.init_time) if self.first_token_time is not None else None metrics = { "init_time": self.init_time, "first_token_time": self.first_token_time, "first_token_latency": first_token_latency, "end_time": self.end_time, "total_time": total_time, # Total time in seconds "total_tokens": self.token_count, "tokens_per_second": tokens_per_second } return metrics def generate_stream(model, tokenizer, messages, nothink, skip_prompt, skip_special_tokens, do_sample, max_new_tokens): input_ids = tokenizer.apply_chat_template( messages, tokenize=True, enable_thinking = not nothink, add_generation_prompt=True, return_tensors="pt" ) attention_mask = torch.ones_like(input_ids, dtype=torch.long) tokens = input_ids.to(model.device) attention_mask = attention_mask.to(model.device) streamer = CustomTextStreamer(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens) def signal_handler(sig, frame): streamer.stop_generation() print("\n[Generation stopped by user with Ctrl+C]") signal.signal(signal.SIGINT, signal_handler) generate_kwargs = {} if do_sample: generate_kwargs = { "do_sample": do_sample, "max_length": max_new_tokens, "temperature": 0.6, "top_k": 20, "top_p": 0.95, "repetition_penalty": 1.2, "no_repeat_ngram_size": 2 } else: generate_kwargs = { "do_sample": do_sample, "max_length": max_new_tokens, "repetition_penalty": 1.2, "no_repeat_ngram_size": 2 } print("Response: ", end="", flush=True) try: generated_ids = model.generate( tokens, attention_mask=attention_mask, #use_cache=False, pad_token_id=tokenizer.pad_token_id, streamer=streamer, **generate_kwargs ) del generated_ids except StopIteration: print("\n[Stopped by user]") 
del input_ids, attention_mask torch.cuda.empty_cache() signal.signal(signal.SIGINT, signal.SIG_DFL) return streamer.generated_text, streamer.stop_flag, streamer.get_metrics() init_seed = set_random_seed() while True: if same_seed: set_random_seed(init_seed) else: init_seed = set_random_seed() print(f"\nnothink: {nothink}") print(f"skip_prompt: {skip_prompt}") print(f"skip_special_tokens: {skip_special_tokens}") print(f"do_sample: {do_sample}") print(f"same_seed: {same_seed}, {init_seed}\n") user_input = input("User: ").strip() if user_input.lower() == "/exit": print("Exiting chat.") break if user_input.lower() == "/clear": messages = [] print("Chat history cleared. Starting a new conversation.") continue if user_input.lower() == "/nothink": nothink = not nothink continue if user_input.lower() == "/skip_prompt": skip_prompt = not skip_prompt continue if user_input.lower() == "/skip_special_tokens": skip_special_tokens = not skip_special_tokens continue if user_input.lower().startswith("/same_seed"): parts = user_input.split() if len(parts) == 1: # /same_seed (no number) same_seed = not same_seed # Toggle switch elif len(parts) == 2: # /same_seed <number> try: init_seed = int(parts[1]) # Extract and convert number to int same_seed = True except ValueError: print("Error: Please provide a valid integer after /same_seed") continue if user_input.lower() == "/do_sample": do_sample = not do_sample continue if not user_input: print("Input cannot be empty. Please enter something.") continue messages.append({"role": "user", "content": user_input}) response, stop_flag, metrics = generate_stream(model, tokenizer, messages, nothink, skip_prompt, skip_special_tokens, do_sample, 40960) print("\n\nMetrics:") for key, value in metrics.items(): print(f" {key}: {value}") print("", flush=True) if stop_flag: continue messages.append({"role": "assistant", "content": response}) ``` ### Usage Warnings - **Risk of Sensitive or Controversial Outputs**: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs. - **Not Suitable for All Audiences**: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security. - **Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences. - **Research and Experimental Use**: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications. - **Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content. - **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use. ### Donation If you like it, please click 'like' and follow us for more updates. You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai. 
##### Your donation helps us continue further development and improvement; even a cup of coffee can do it. - bitcoin(BTC): ``` bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge ```
mlfoundations-dev/DeepSeek-R1-Distill-Qwen-1.5B_OpenThoughts3
mlfoundations-dev
2025-06-18T16:12:26Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T16:08:47Z
--- library_name: transformers license: other tags: - llama-factory - full - generated_from_trainer model-index: - name: DeepSeek-R1-Distill-Qwen-1.5B_OpenThoughts3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DeepSeek-R1-Distill-Qwen-1.5B_OpenThoughts3 This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) (loaded from a local snapshot during training) on the mlfoundations-dev/OpenThoughts3 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 512 - total_train_batch_size: 512 - total_eval_batch_size: 4096 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.0
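For reference, the cosine schedule with 10% linear warmup implied by these hyperparameters can be sketched as below (peak learning rate 8e-5; an illustration, not the exact trainer code):

```python
import math

def lr_at(step: int, total_steps: int, peak_lr: float = 8e-5, warmup_ratio: float = 0.1) -> float:
    # Linear warmup for the first 10% of steps, cosine decay to zero afterwards
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1 + math.cos(math.pi * progress))
```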
ztwqd3n6/pony-diffusion-v6-xl
ztwqd3n6
2025-06-18T16:07:16Z
0
0
null
[ "license:other", "region:us" ]
null
2025-05-25T20:38:18Z
--- license: other license_name: fair-ai-public-license-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ ---
sergiopaniego/gemma-3-4b-pt-object-detection-loc-tokens
sergiopaniego
2025-06-18T16:04:37Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-18T16:01:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
neural-interactive-proofs/finetune_dpo_cv_test_lm_server_34_0_iter_0_provers_group_2025-06-18_17-02-34_Qwen_Qwen2.5-0.5B-I
neural-interactive-proofs
2025-06-18T16:03:18Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-18T16:03:13Z
--- base_model: Qwen/Qwen2.5-0.5B-Instruct library_name: transformers model_name: finetune_dpo_cv_test_lm_server_34_0_iter_0_provers_group_2025-06-18_17-02-34_Qwen_Qwen2.5-0.5B-I tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for finetune_dpo_cv_test_lm_server_34_0_iter_0_provers_group_2025-06-18_17-02-34_Qwen_Qwen2.5-0.5B-I This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_cv_test_lm_server_34_0_iter_0_provers_group_2025-06-18_17-02-34_Qwen_Qwen2.5-0.5B-I", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/Qwen_Qwen2.5-0.5B-Instruct_dpo_2025-06-18_17-02-34_cv_test_lm_server_34_0_iter_0_provers_group) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.15.2 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 2.21.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
LandCruiser/sn21_omg_1806_23
LandCruiser
2025-06-18T16:02:46Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-18T16:00:49Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
LandCruiser/sn21_omg_1806_18
LandCruiser
2025-06-18T16:02:40Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-18T15:45:50Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
LandCruiser/sn21_omg_1806_17
LandCruiser
2025-06-18T16:02:27Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-18T15:45:50Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
LandCruiser/sn21_omg_1806_16
LandCruiser
2025-06-18T16:02:23Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-18T15:45:49Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
LandCruiser/sn21_omg_1806_14
LandCruiser
2025-06-18T16:02:01Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-18T15:45:49Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
helmo/marian-finetuned-kde4-en-to-fr
helmo
2025-06-18T16:00:28Z
0
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2025-06-18T14:03:01Z
--- library_name: transformers license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: en-fr split: train args: en-fr metrics: - name: Bleu type: bleu value: 36.33596022358762 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8553 - Model Preparation Time: 0.0045 - Bleu: 36.3360 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0 - Datasets 3.5.0 - Tokenizers 0.21.1
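For a quick check, the checkpoint loads with the standard `transformers` translation pipeline; a minimal sketch (the example sentence is illustrative, not drawn from the KDE4 evaluation set):

```python
from transformers import pipeline

# Load the fine-tuned English-to-French Marian checkpoint from the Hub.
translator = pipeline("translation", model="helmo/marian-finetuned-kde4-en-to-fr")

# Translate a short English sentence; the pipeline returns a list of dicts
# with a "translation_text" field.
result = translator("This plugin allows you to translate web pages.")
print(result[0]["translation_text"])
```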
BernalHR/V2Phi-3-mini-4k-instruct-Inscripciones-bnb-4bit-GGUF
BernalHR
2025-06-18T15:57:57Z
0
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:quantized:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-18T15:57:21Z
--- base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** BernalHR - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/RLPR-Qwen2.5-7B-Base-GGUF
mradermacher
2025-06-18T15:56:46Z
0
0
transformers
[ "transformers", "gguf", "en", "dataset:openbmb/RLPR-train", "base_model:RLAIF-V/RLPR-Qwen2.5-7B-Base", "base_model:quantized:RLAIF-V/RLPR-Qwen2.5-7B-Base", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-18T13:27:36Z
--- base_model: RLAIF-V/RLPR-Qwen2.5-7B-Base datasets: - openbmb/RLPR-train language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/RLAIF-V/RLPR-Qwen2.5-7B-Base <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
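Beyond the llama.cpp CLI route, the quant files can also be loaded programmatically with the `llama-cpp-python` bindings; a minimal sketch (assumes `llama-cpp-python` and `huggingface_hub` are installed; the file name is taken from the Q4_K_M row of the table above):

```python
from llama_cpp import Llama

# Download the Q4_K_M quant from this repo and load it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/RLPR-Qwen2.5-7B-Base-GGUF",
    filename="RLPR-Qwen2.5-7B-Base.Q4_K_M.gguf",
)

# Run a short completion as a smoke test.
out = llm("Explain reinforcement learning in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```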
amgule/meme-model-merged
amgule
2025-06-18T15:55:46Z
14
0
transformers
[ "transformers", "safetensors", "qwen2_vl", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen2-VL-2B-Instruct", "base_model:finetune:unsloth/Qwen2-VL-2B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-01T11:24:51Z
--- base_model: unsloth/Qwen2-VL-2B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2_vl license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** amgule - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2-VL-2B-Instruct This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. Trained on the [hateful_memes subset](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron/viewer/hateful_memes) of the HuggingFaceM4/the_cauldron dataset. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
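A rough inference sketch with plain `transformers` (the image path and prompt are placeholders; the author's exact evaluation setup is not documented in this card):

```python
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# Load the merged fine-tune and its processor from the Hub.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "amgule/meme-model-merged", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("amgule/meme-model-merged")

# Placeholder image path; any meme image works for a smoke test.
image = Image.open("meme.jpg")
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Is this meme hateful? Answer yes or no."},
]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=32)
# Strip the prompt tokens before decoding the model's answer.
answer = processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0]
print(answer)
```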
sgonzalezygil/sd-finetuning-dreambooth-v11-1200
sgonzalezygil
2025-06-18T15:55:32Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-18T15:53:53Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
epfl-dlab/zip2zip-Phi-3-medium-instruct-v0.1
epfl-dlab
2025-06-18T15:54:18Z
0
0
transformers
[ "transformers", "safetensors", "zip2zip", "arxiv:1910.09700", "arxiv:2506.01084", "base_model:microsoft/Phi-3-medium-4k-instruct", "base_model:finetune:microsoft/Phi-3-medium-4k-instruct", "endpoints_compatible", "region:us" ]
null
2025-06-18T15:53:11Z
--- library_name: transformers tags: - zip2zip base_model: microsoft/Phi-3-medium-4k-instruct --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] # Zip2Zip This model is a [Zip2Zip](https://arxiv.org/abs/2506.01084) model.
Manchester-City-Wydad-AC-Direct-Video/Manchester.City.Wydad.AC.En.Direct.Streaming.Gratuit.tv.Official
Manchester-City-Wydad-AC-Direct-Video
2025-06-18T15:52:52Z
0
0
null
[ "region:us" ]
null
2025-06-18T15:49:17Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/mrmpsap6?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
LandCruiser/sn21_omg_1806_6
LandCruiser
2025-06-18T15:51:20Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-18T15:45:45Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
LandCruiser/sn21_omg_1806_9
LandCruiser
2025-06-18T15:51:15Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-18T15:45:46Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
LandCruiser/sn21_omg_1806_2
LandCruiser
2025-06-18T15:50:54Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-18T15:45:43Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
LandCruiser/sn21_omg_1806_4
LandCruiser
2025-06-18T15:50:33Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-18T15:45:44Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
SzilviaB/mergekit-passthrough-zyecuzy-Q5_K_M-GGUF
SzilviaB
2025-06-18T15:45:00Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:mergekit-community/mergekit-passthrough-zyecuzy", "base_model:quantized:mergekit-community/mergekit-passthrough-zyecuzy", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-18T15:44:23Z
--- base_model: mergekit-community/mergekit-passthrough-zyecuzy library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # SzilviaB/mergekit-passthrough-zyecuzy-Q5_K_M-GGUF This model was converted to GGUF format from [`mergekit-community/mergekit-passthrough-zyecuzy`](https://huggingface.co/mergekit-community/mergekit-passthrough-zyecuzy) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/mergekit-community/mergekit-passthrough-zyecuzy) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo SzilviaB/mergekit-passthrough-zyecuzy-Q5_K_M-GGUF --hf-file mergekit-passthrough-zyecuzy-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo SzilviaB/mergekit-passthrough-zyecuzy-Q5_K_M-GGUF --hf-file mergekit-passthrough-zyecuzy-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo SzilviaB/mergekit-passthrough-zyecuzy-Q5_K_M-GGUF --hf-file mergekit-passthrough-zyecuzy-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo SzilviaB/mergekit-passthrough-zyecuzy-Q5_K_M-GGUF --hf-file mergekit-passthrough-zyecuzy-q5_k_m.gguf -c 2048 ```
dgambettaphd/M_llm2_run2_gen6_WXS_doc1000_synt120_lr1e-04_acm_SYNLAST
dgambettaphd
2025-06-18T15:44:12Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-18T15:43:57Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
andrewoh/facebook-opt-350m-finetuned-lifescience-v1
andrewoh
2025-06-18T15:42:17Z
0
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T15:41:27Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JMcoding92/bloomfield-distilbert-finetuned
JMcoding92
2025-06-18T15:41:42Z
0
1
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "home", "building", "customer", "service", "construction", "intent-classification", "en", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-18T15:09:46Z
--- library_name: transformers tags: - home - building - customer - service - construction - text-classification - intent-classification - distilbert license: mit language: - en base_model: - distilbert/distilbert-base-uncased --- # Bloomfield-DistilBERT-Finetuned ## Model Details ### Model Description This is a fine-tuned DistilBERT model (`distilbert-base-uncased`) for intent classification in the homebuilding sales domain, specifically for Bloomfield Homes, a homebuilder in the Dallas-Fort Worth Metroplex. The model classifies user queries into one of 12 intents relevant to homebuying, such as inquiries about incentives, floor plans, or community amenities. It is designed to integrate with a conversational AI system (e.g., Grok-3) to route queries to appropriate response logic. - **Base Model**: `distilbert-base-uncased` - **Task**: Text classification (12 intents) - **Intents**: - `career_queries`: Job or employment inquiries - `community_info`: Questions about community amenities (e.g., pools, parks) - `provide_contact_info`: Requests for contact details or model home addresses - `warranty_info`: Warranty or repair inquiries - `special_offers_and_incentives`: Questions about promotions or discounts - `financing_queries`: Inquiries about loans or interest rates - `payment_queries`: Questions about deposits or payment processes - `floor_plan_queries`: Inquiries about floor plans (e.g., "Bellflower") - `available_homes`: Requests for move-in-ready homes - `take_contact_info`: Requests to be contacted by a team member - `location_query`: Questions about community locations - `general_query`: Vague or unclassified queries (e.g., complaints); used as the fallback intent when no others match - **Language**: English - **License**: [MIT License](https://opensource.org/licenses/MIT) - **Repository**: [JMcoding92/bloomfield-distilbert-finetuned](https://huggingface.co/JMcoding92/bloomfield-distilbert-finetuned) - **Developed by:** [JMcoding92 - BuilderChat AI - CAN/USA] - **Funded by:** [BuilderChat AI] - **Shared by:** [JMcoding92 - BuilderChat] - **Model type:** DistilBERT text classifier, intended for context/intent detection only - **Finetuned from model:** distilbert/distilbert-base-uncased ## Intended Use This model is intended for use in a chatbot or conversational AI system for Bloomfield Homes to classify user queries into one of the 12 intents. The classified intent is passed to a backend system (e.g., Grok-3-mini-high) for generating context-appropriate responses. It is optimized for short, natural-language queries typical of homebuying conversations (e.g., "What incentives in Painted Tree?", "Tell me about the Jasmine floor plan"). ### Use Cases - Customer support chatbot for homebuyers - Intent routing in a conversational AI pipeline - Real-time query classification in a FastAPI-based API ### Out-of-Scope Use - General-purpose text classification outside the homebuilding domain - Response generation (model only classifies intents) - Non-English queries ## Training Data The model was fine-tuned on a custom dataset of ~600 labeled examples (~50 per intent), collected from synthetic phrases and real user transcripts from Bloomfield Homes' chatbot interactions. The dataset (`intents.json`) includes: - **Queries**: Short, natural-language questions or statements (e.g., "What's the warranty on a home in Copper Creek?", "Any move-in-ready homes in Lavon?"). - **Intents**: 12 categories specific to homebuilding sales, as listed above. 
- **Source**: - Synthetic phrases generated using synonyms, community names (e.g., "Grand Heritage"), and floor plan names (e.g., "Violet"). - Real user queries from chatbot transcripts (May 1–6 and May 19–24, 2025). - **Split**: 80% training (~480 examples), 20% validation (~120 examples), stratified by intent. - **Preprocessing**: Tokenized with `DistilBertTokenizer`, max length 128. ## Training Procedure The model was fine-tuned using the Hugging Face `transformers` library in a CPU environment. - **Base Model**: `distilbert-base-uncased` - **Hyperparameters**: - Epochs: 3 - Batch size: 8 (train and eval) - Learning rate: 5e-5 (with 50 warmup steps, linear decay) - Weight decay: 0.01 - Max sequence length: 128 - Eval strategy: Per epoch - Optimizer: AdamW - **Metrics**: - Validation accuracy: 94.69% (Epoch 3) - Validation loss: 0.384 (Epoch 3) - Training loss: 1.366 (average) - **Runtime**: ~2.3 minutes (139 seconds) for 171 steps - **Environment**: Python 3.11, `transformers==4.44.2`, `torch`, `datasets` - **Output**: Saved to `./distilbert_finetuned`, pushed to `JMcoding92/bloomfield-distilbert-finetuned` ## Evaluation Results The model was evaluated on a validation set of ~120 examples (20% of the dataset). | Epoch | Validation Accuracy | Validation Loss | |-------|---------------------|-----------------| | 1 | 78.76% | 1.826 | | 2 | 91.15% | 0.643 | | 3 | **94.69%** | **0.384** | The high accuracy indicates robust performance for the 12 intents, though the small dataset size may limit generalization to unseen query variations (e.g., misspellings). ## Usage ### Installation ```bash pip install transformers optimum[onnxruntime] torch ``` ### Loading the Model ```python from transformers import pipeline classifier = pipeline( "text-classification", model="JMcoding92/bloomfield-distilbert-finetuned", tokenizer="JMcoding92/bloomfield-distilbert-finetuned" ) # Example query result = classifier("What incentives in Painted Tree?") print(result) # [{'label': 'special_offers_and_incentives', 'score': 0.99}] ``` ### FastAPI Integration The model is deployed in a FastAPI app (main.py) for real-time intent classification, optionally using ONNX format for efficiency. See the repository for the API code; a minimal endpoint sketch appears after the Future Improvements section below. ```bash curl -X POST "http://localhost:8000/classify" \ -H "Content-Type: application/json" \ -d '{"message": "What incentives in Painted Tree?"}' ``` Output: `{"intent": "special_offers_and_incentives"}` ## Limitations - **Dataset Size**: Trained on ~600 examples, which may not cover all query variations (e.g., misspellings like "insentives"). - **Domain Specificity**: Optimized for Bloomfield Homes' homebuilding context; may perform poorly on unrelated domains. - **Single-Turn Queries**: Trained on single-turn queries; multi-turn context (e.g., conversation history) may require additional data. - **Language**: English only. - **Generalization**: May misclassify ambiguous or out-of-domain queries as `general_query`. ## Future Improvements - Expand the dataset with more examples, including misspellings and multi-turn queries. - Incorporate conversation history for context-aware classification. - Test on diverse real-world queries to improve robustness. - Convert to ONNX format for faster inference (planned). 
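A minimal sketch of the `/classify` endpoint described in the FastAPI Integration section above (the exact contents of main.py are not shown in the card, so the app structure is an assumption; serve it with `uvicorn main:app`):

```python
# Hypothetical main.py matching the documented curl example; not the author's exact code.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
classifier = pipeline(
    "text-classification",
    model="JMcoding92/bloomfield-distilbert-finetuned",
)

class Query(BaseModel):
    message: str

@app.post("/classify")
def classify(query: Query):
    # Return only the top intent label, as in the documented output format.
    result = classifier(query.message)[0]
    return {"intent": result["label"]}
```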
## Citation If you use this model, please cite: ```bibtex @misc{bloomfield_distilbert_finetuned, author = {JMcoding92}, title = {Bloomfield-DistilBERT-Finetuned: Intent Classification for Homebuilding Sales}, year = {2025}, organization = {BuilderChat}, publisher = {Hugging Face}, url = {https://huggingface.co/JMcoding92/bloomfield-distilbert-finetuned} } ``` +++ DEVELOPED WITH AI FOR AI +++ ## Contact For questions or issues, contact JMcoding92 or open an issue in the repository.
xaek08/bart-base-finetuned-ccdv-govreport
xaek08
2025-06-18T15:33:38Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "summarization", "generated_from_trainer", "dataset:ccdv/govreport-summarization", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2025-06-16T18:36:24Z
--- library_name: transformers license: apache-2.0 base_model: facebook/bart-base tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: bart-base-finetuned-ccdv-govreport results: [] datasets: - ccdv/govreport-summarization --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-ccdv-govreport This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the [ccdv/govreport-summarization](https://huggingface.co/datasets/ccdv/govreport-summarization) dataset. It achieves the following results on the evaluation set: - Loss: 1.8338 - Rouge1: 0.3117 - Rouge2: 0.1529 - Rougel: 0.2621 - Rougelsum: 0.269 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 2.0154 | 1.0 | 2190 | 1.8889 | 0.2786 | 0.1373 | 0.236 | 0.2419 | | 1.5738 | 2.0 | 4380 | 1.8338 | 0.3117 | 0.1529 | 0.2621 | 0.269 | ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
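For quick use, the checkpoint works with the standard `transformers` summarization pipeline; a minimal sketch (the report text is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned BART summarizer from the Hub.
summarizer = pipeline("summarization", model="xaek08/bart-base-finetuned-ccdv-govreport")

# Placeholder input; in practice this would be a long government report.
report = "The Government Accountability Office reviewed ..."
summary = summarizer(report, max_length=128, min_length=32, truncation=True)
print(summary[0]["summary_text"])
```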
Flickinshots/ppo-LunarLander-v2
Flickinshots
2025-06-18T15:30:15Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-18T15:29:52Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 249.01 +/- 16.94 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
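Pending the TODO above, here is a minimal loading sketch with `huggingface_sb3` and `stable_baselines3` (the checkpoint file name is an assumption based on the usual naming convention; `gymnasium[box2d]` is needed for LunarLander):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (file name assumed).
checkpoint = load_from_hub(
    repo_id="Flickinshots/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode greedily.
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```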
sgonzalezygil/sd-finetuning-dreambooth-v11
sgonzalezygil
2025-06-18T15:30:11Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-18T15:28:16Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vxpll/Elsa
vxpll
2025-06-18T15:28:12Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-18T15:27:37Z
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
  output:
    url: images/photo_2025-06-18_18-19-32.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: elsa
---

# Elsa

<Gallery />

## Trigger words

You should use `elsa` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/vxpll/Elsa/tree/main) them in the Files & versions tab.
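## Usage

A minimal loading sketch with 🧨 diffusers follows. The pipeline class and LoRA call are the standard ones for FLUX.1-dev adapters, but everything beyond the `elsa` trigger word — the prompt, step count, and the assumption of a CUDA GPU with enough VRAM — is illustrative, not taken from this card.

```python
# Minimal sketch of using this LoRA with diffusers (assumes a CUDA GPU;
# the prompt is illustrative — only the `elsa` trigger word comes from the card).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("vxpll/Elsa")  # pulls the Safetensors LoRA from this repo

image = pipe("elsa, portrait, soft lighting", num_inference_steps=28).images[0]
image.save("elsa.png")
```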
Slaiwala/askstein-lora
Slaiwala
2025-06-18T15:23:21Z
0
1
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-18T15:23:14Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

Auto-generated 🤗 transformers placeholder card, verbatim-identical to the template reproduced in full above (with "🤗 transformers" in place of "🧨 diffusers"); every section reads "[More Information Needed]".
Sharing22/aaa_c7
Sharing22
2025-06-18T15:22:52Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T15:17:59Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

Auto-generated 🤗 transformers placeholder card, verbatim-identical to the template reproduced in full above (with "🤗 transformers" in place of "🧨 diffusers"); every section reads "[More Information Needed]".
sanchit42/qwen3-8B-instruct-29reports-lora256
sanchit42
2025-06-18T15:20:24Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T15:17:36Z
---
library_name: transformers
tags:
- llama-factory
---

# Model Card for Model ID

Auto-generated 🤗 transformers placeholder card, verbatim-identical to the template reproduced in full above (with "🤗 transformers" in place of "🧨 diffusers"); every section reads "[More Information Needed]".
VIPL-GENUN/Jodi
VIPL-GENUN
2025-06-18T15:11:37Z
40
6
null
[ "Diffusion", "Text-to-Image", "Controllable-Generation", "Image-Perception", "arxiv:2505.19084", "base_model:Efficient-Large-Model/Sana_1600M_1024px_BF16", "base_model:finetune:Efficient-Large-Model/Sana_1600M_1024px_BF16", "region:us" ]
null
2025-05-24T06:23:04Z
---
base_model:
- Efficient-Large-Model/Sana_1600M_1024px_BF16
- VIPL-GENUN/Jodi
tags:
- Diffusion
- Text-to-Image
- Controllable-Generation
- Image-Perception
---

# Jodi

We introduce Jodi, a diffusion framework that unifies visual generation and understanding by jointly modeling the image domain and multiple label domains.

- **arXiv**: <https://arxiv.org/abs/2505.19084>
- **Project page**: <https://VIPL-GENUN.github.io/Project-Jodi>
- **GitHub**: <https://github.com/VIPL-GENUN/Jodi>
- **Joint-1.6M Dataset**: <https://huggingface.co/datasets/VIPL-GENUN/Joint-1.6M-1024px>

![](./assets/banner.jpg)

<br>

# Citation

If you find this project helpful, please consider citing:

```bibtex
@article{xu2025jodi,
  title={Jodi: Unification of Visual Generation and Understanding via Joint Modeling},
  author={Xu, Yifeng and He, Zhenliang and Kan, Meina and Shan, Shiguang and Chen, Xilin},
  journal={arXiv preprint arXiv:2505.19084},
  year={2025}
}
```
gradientrouting-spar/mc9_badmed_representation_constraint_beta_kl-1000.0_seed_1
gradientrouting-spar
2025-06-18T15:08:28Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-18T15:07:51Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

Auto-generated 🤗 transformers placeholder card, verbatim-identical to the template reproduced in full above (with "🤗 transformers" in place of "🧨 diffusers"); every section reads "[More Information Needed]".
indicinaaa/Qwen3-finNER-8B-fp4
indicinaaa
2025-06-18T15:08:28Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-06-18T14:53:03Z
---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---

# Uploaded fine-tuned model

- **Developed by:** indicinaaa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-8b-unsloth-bnb-4bit

This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
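The card ships no usage snippet, so a minimal loading sketch with 🤗 transformers follows. The repo id and chat-template call are standard; the NER-style prompt is an assumption inferred from the model name, and bitsandbytes is assumed to be installed for the 4-bit weights.

```python
# Minimal sketch, assuming bitsandbytes is available for the 4-bit checkpoint;
# the prompt below is illustrative, not taken from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "indicinaaa/Qwen3-finNER-8B-fp4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user",
             "content": "Extract the financial named entities: Apple acquired Beats Electronics for $3 billion in 2014."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```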
QuantFactory/Foundation-Sec-8B-GGUF
QuantFactory
2025-06-18T14:58:12Z
0
1
transformers
[ "transformers", "gguf", "security", "text-generation", "en", "arxiv:2504.21039", "base_model:meta-llama/Llama-3.1-8B", "base_model:quantized:meta-llama/Llama-3.1-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T11:31:03Z
---
base_model:
- meta-llama/Llama-3.1-8B
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- security
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Foundation-Sec-8B-GGUF

This is a quantized version of [fdtn-ai/Foundation-Sec-8B](https://huggingface.co/fdtn-ai/Foundation-Sec-8B), created using llama.cpp.

# Original Model Card

# Foundation-Sec-8B - Model Card

## Model Information

Foundation-Sec-8B (Llama-3.1-FoundationAI-SecurityLLM-base-8B) is an open-weight, 8-billion-parameter base language model specialized for cybersecurity applications. It extends the Llama-3.1-8B model through continued pretraining on a curated corpus of cybersecurity-specific text, including threat intelligence reports, vulnerability databases, incident response documentation, and security standards. It has been trained to understand security concepts, terminology, and practices across multiple security domains. The model is designed to serve as a domain-adapted base model for use in applications such as threat detection, vulnerability assessment, security automation, and attack simulation. Foundation-Sec-8B enables organizations to build AI-driven security tools that can be deployed locally, reducing dependency on cloud-based AI services while maintaining high performance on security-related tasks.

- **Model Name:** Foundation-Sec-8B (Llama-3.1-FoundationAI-SecurityLLM-base-8B)
- **Model Developer:** Amin Karbasi and team at Foundation AI — Cisco
- **Technical Report:** [`https://arxiv.org/abs/2504.21039`](https://arxiv.org/abs/2504.21039)
- **Model Card Contact:** For questions about the team, model usage, and future directions, contact [`[email protected]`](mailto:[email protected]). For technical questions about the model, please contact [`[email protected]`](mailto:[email protected]).
- **Model Release Date:** April 28, 2025
- **Supported Language(s):** English
- **Model Architecture:** Auto-regressive language model that uses an optimized transformer architecture (Meta Llama-3.1-8B backbone)
- **Training Objective:** Continued pre-training on a cybersecurity-specific corpus
- **Training Data Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released on updated data.
- **License:** Apache 2.0

## Intended Use

### Intended Use Cases

Foundation-Sec-8B is designed for security practitioners, researchers, and developers building AI-powered security workflows and applications. It is optimized for three core use case categories:

- **SOC Acceleration**: Automating triage, summarization, case note generation, and evidence collection.
- **Proactive Threat Defense**: Simulating attacks, prioritizing vulnerabilities, mapping TTPs, and modeling attacker behavior.
- **Engineering Enablement**: Providing security assistance, validating configurations, assessing compliance evidence, and improving security posture.

The model is intended for local deployment in environments prioritizing data security, regulatory compliance, and operational control.

### Downstream Use

Foundation-Sec-8B can be used directly for security-related language tasks and serves as a strong starting point for fine-tuning across a variety of cybersecurity workflows. Example downstream applications include:

- Summarization
  - Summarizing detection playbooks and incident reports
  - Consolidating fragmented analyst notes into structured case summaries
- Classification
  - Mapping threats to MITRE ATT&CK techniques
  - Prioritizing vulnerabilities based on contextual risk
  - Classifying security-relevant emails and leaked file contents
- Named Entity Recognition
  - Extracting compliance evidence from documents
  - Building network behavior profiles from technical manuals
- Question & Answer
  - Assisting SOC analysts with alert triage and investigation
  - Responding to cloud security and software compliance queries
- Reasoning and Text Generation
  - Generating red-team attack plans and threat models
  - Predicting attacker next steps in active investigations
  - Enriching vulnerability scan results with contextual insights

For questions or assistance with fine-tuning Foundation-Sec-8B, please contact **Paul Kassianik** ([email protected]) or **Dhruv Kedia** ([email protected]).

### Out-of-Scope Use

The following uses are out-of-scope and are neither recommended nor intended use cases:

1. **Generating harmful content** - The model should not be used to:
   - Generate malware or other malicious code
   - Create phishing content or social engineering scripts
   - Develop attack plans targeting specific organizations
   - Design exploitation techniques for vulnerabilities without legitimate security research purposes
2. **Critical security decisions without human oversight** - The model should not be used for:
   - Autonomous security decision-making without human review
   - Critical infrastructure protection without expert supervision
   - Final determination of security compliance without human verification
   - Autonomous vulnerability remediation without testing
3. **Legal or medical advice** - The model is not qualified to provide:
   - Legal advice regarding security regulations, compliance requirements, or intellectual property disputes
   - Legal advice regarding security issues that would reference legal statutes, precedents, or case law necessary to provide legal advice
   - Medical advice regarding health impacts of security incidents
4. **Non-security use cases** - The model is specifically optimized for cybersecurity and may not perform as well on general tasks as models trained for broader applications.
5. **Violation of Laws or Regulations** - Any use that violates applicable laws or regulations.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
# Import the required libraries
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("fdtn-ai/Foundation-Sec-8B")
model = AutoModelForCausalLM.from_pretrained("fdtn-ai/Foundation-Sec-8B")

# Example: Matching CWE to CVE IDs
prompt = """CVE-2021-44228 is a remote code execution flaw in Apache Log4j2 via unsafe JNDI lookups (“Log4Shell”). The CWE is CWE-502.

CVE-2017-0144 is a remote code execution vulnerability in Microsoft’s SMBv1 server (“EternalBlue”) due to a buffer overflow. The CWE is CWE-119.

CVE-2014-0160 is an information-disclosure bug in OpenSSL’s heartbeat extension (“Heartbleed”) causing out-of-bounds reads. The CWE is CWE-125.

CVE-2017-5638 is a remote code execution issue in Apache Struts 2’s Jakarta Multipart parser stemming from improper input validation of the Content-Type header. The CWE is CWE-20.

CVE-2019-0708 is a remote code execution vulnerability in Microsoft’s Remote Desktop Services (“BlueKeep”) triggered by a use-after-free. The CWE is CWE-416.

CVE-2015-10011 is a vulnerability about OpenDNS OpenResolve improper log output neutralization. The CWE is"""

# Tokenize the input
inputs = tokenizer(prompt, return_tensors="pt")

# Generate the response
outputs = model.generate(
    inputs["input_ids"],
    max_new_tokens=3,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
)

# Decode and print the response
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
response = response.replace(prompt, "").strip()
print(response)
```

## Training and Evaluation

### Training Data

Foundation-Sec-8B was pretrained on approximately **5.1 billion tokens** of cybersecurity-specific data curated in-house by Cisco’s Foundation AI team. The dataset was meticulously collected from public sources on the web.

The pre-training corpus was built through a multi-stage pipeline that included large-scale web crawling, relevancy filtering, deduplication, and quality filtering.

**Data cutoff:** April 10th, 2025.

More detailed methodology is available in the technical report.

### Training Setup

Foundation-Sec-8B is based on the **Llama 3.1 8B** architecture. Pre-training was performed on Cisco Foundation AI’s internal compute cluster.

Key training details:

- **Continued pretraining** for cybersecurity specialization
- **4096-token** sequence length
- **Optimizer:** AdamW

More detailed methodology is available in the technical report.

### Evaluation

Foundation-Sec-8B was benchmarked on cybersecurity and general reasoning tasks, using a standardized 5-shot prompting setup (temperature = 0.3).

| **Benchmark** | **Foundation-Sec-8B** | **Llama 3.1 8B** | **Llama 3.1 70B** |
| --- | --- | --- | --- |
| CTI-MCQA | 67.39 | 64.14 | 68.23 |
| CTI-RCM | 75.26 | 66.43 | 72.66 |

**Benchmark Overview:**

- **CTI-MCQA:** 2,500 multiple-choice questions testing cybersecurity knowledge across frameworks like MITRE ATT&CK, NIST, GDPR, and threat intelligence best practices.
- **CTI-RCM:** 900+ vulnerability root cause mapping examples linking CVEs to CWE categories, assessing deep understanding of security weaknesses.

**Key highlights:**

- **+3 to +9 point gains** over Llama-3.1-8B across security-specific benchmarks.
- **Comparable or better** performance than Llama-3.1-70B on cyber threat intelligence tasks.
- **Minimal drop (~2%)** in general language reasoning (MMLU) despite cybersecurity specialization.

For full benchmark details and evaluation methodology, please refer to the technical report.

## Limitations

Foundation-Sec-8B has several limitations that users should be aware of:

1. **Domain-specific knowledge limitations**:
   - Foundation-Sec-8B may not be familiar with recent vulnerabilities, exploits, novel attack vectors, or security technologies released after its training cutoff date
   - Knowledge of specialized or proprietary security systems or tools may be limited
2. **Potential biases**:
   - The model may reflect biases present in security literature and documentation
   - The model may be trained on known attack patterns and have difficulty recognizing novel attack vectors
   - Security practices and recommendations may be biased toward certain technological ecosystems
   - Geographic and cultural biases in security approaches may be present
3. **Security risks**:
   - The model cannot verify the identity or intentions of users
   - Adversarial prompting techniques might potentially bypass safety mechanisms
   - The model may unintentionally provide information that could be misused if proper prompting guardrails are not implemented
4. **Contextual blindness:**
   - The model may struggle to understand the complex interrelationships between systems, users, and data in order to provide accurate context.
5. **Technical limitations**:
   - Performance varies based on how security concepts are described in prompts
   - May not fully understand complex, multi-step security scenarios without clear explanation
   - Cannot access external systems or actively scan environments
   - Cannot independently verify factual accuracy of its outputs
6. **Ethical considerations**:
   - Dual-use nature of security knowledge requires careful consideration of appropriate use cases

### Recommendations

To address the limitations of Foundation-Sec-8B, we recommend:

1. **Human oversight**:
   - Always have qualified security professionals review model outputs before implementation
   - Use the model as an assistive tool rather than a replacement for expert human judgment
   - Implement a human-in-the-loop approach for security-critical applications
2. **System design safeguards**:
   - Implement additional validation layers for applications built with this model
   - Consider architectural constraints that limit the model's ability to perform potentially harmful actions (excessive agency)
   - Deploy the model in environments with appropriate access controls
3. **Prompt engineering**:
   - Use carefully designed prompts that encourage ethical security practices
   - Include explicit instructions regarding responsible disclosure and ethical hacking principles
   - Structure interactions to minimize the risk of inadvertently harmful outputs
4. **Knowledge supplementation**:
   - Supplement the model with up-to-date security feeds and databases
   - Implement retrieval-augmented generation for current threat intelligence sources
5. **Usage policies**:
   - Develop and enforce clear acceptable use policies for applications using this model
   - Implement monitoring and auditing for high-risk applications
   - Create documentation for end users about the model's limitations
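## Running the GGUF files

The transformers snippet in "How to Get Started" above loads the original fp16 checkpoint; for the GGUF quantizations this repository actually ships, a llama-cpp-python sketch may be more directly useful. The quant filename below is an assumption — substitute whichever .gguf you download from the Files & versions tab.

```python
# Minimal sketch using llama-cpp-python to run a GGUF quant from this repo.
# The filename is illustrative — use the actual .gguf you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="Foundation-Sec-8B.Q4_K_M.gguf", n_ctx=4096)
out = llm(
    "CVE-2014-0160 is an information-disclosure bug in OpenSSL's heartbeat "
    "extension (Heartbleed) causing out-of-bounds reads. The CWE is",
    max_tokens=8,
    temperature=0.3,
)
print(out["choices"][0]["text"].strip())
```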
racineai/Flantier-SmolVLM-2B-dse
racineai
2025-06-18T14:57:31Z
625
9
null
[ "safetensors", "idefics3", "fr", "en", "de", "es", "it", "dataset:racineai/OGC_2_vdr-visRAG-colpali", "base_model:HuggingFaceTB/SmolVLM-Instruct", "base_model:finetune:HuggingFaceTB/SmolVLM-Instruct", "license:apache-2.0", "region:us" ]
null
2025-03-26T15:49:48Z
---
license: apache-2.0
datasets:
- racineai/OGC_2_vdr-visRAG-colpali
language:
- fr
- en
- de
- es
- it
base_model:
- HuggingFaceTB/SmolVLM-Instruct
---

# Flantier-SmolVLM-2B-dse

A lightweight multimodal vision-language model specialized for technical document retrieval.

## Overview

Flantier-SmolVLM-2B-dse (Document Screenshot Embedding) is a 2B-parameter vision-language model designed for efficient retrieval of technical documentation. It directly encodes document screenshots into embeddings, preserving all information including text, images, and layout without requiring separate content extraction.

## Key Features

- **Efficient Retrieval**: Generates document and query embeddings for semantic similarity search
- **Multimodal Understanding**: Processes text, diagrams, charts, and tables in their original layout
- **Lightweight Architecture**: Only 2B parameters, runs on consumer GPUs
- **No Preprocessing Required**: Works directly with document screenshots

## Installation

```bash
pip install transformers accelerate pillow
```

## Usage Example

```python
from PIL import Image
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq

# Load model and processor
processor = AutoProcessor.from_pretrained("racineai/Flantier-SmolVLM-2B-dse")
model = AutoModelForVision2Seq.from_pretrained(
    "racineai/Flantier-SmolVLM-2B-dse",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Load document image
document_image = Image.open("technical_document.jpg")

# Process for document embedding
doc_messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image?"}
        ]
    },
]
doc_prompt = processor.apply_chat_template(doc_messages, add_generation_prompt=True)
doc_inputs = processor(text=doc_prompt, images=[document_image], return_tensors="pt").to(model.device)

# Generate document embedding
with torch.no_grad():
    doc_outputs = model(**doc_inputs, output_hidden_states=True, return_dict=True)
    doc_embedding = doc_outputs.hidden_states[-1][:, -1]  # Last token embedding
    doc_embedding = torch.nn.functional.normalize(doc_embedding, p=2, dim=-1)

# Process query embedding
query = "What are the specifications of this component?"
query_messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": query}
        ]
    },
]
query_prompt = processor.apply_chat_template(query_messages, add_generation_prompt=True)
query_inputs = processor(text=query_prompt, return_tensors="pt").to(model.device)

# Generate query embedding
with torch.no_grad():
    query_outputs = model(**query_inputs, output_hidden_states=True, return_dict=True)
    query_embedding = query_outputs.hidden_states[-1][:, -1]  # Last token embedding
    query_embedding = torch.nn.functional.normalize(query_embedding, p=2, dim=-1)

# Calculate similarity
similarity = torch.nn.functional.cosine_similarity(query_embedding, doc_embedding)
print(f"Similarity score: {similarity.item():.4f}")
```

## Applications

- **Technical Document Retrieval**: Find relevant documents based on technical queries
- **Technical Support Systems**: Match user questions to relevant documentation
- **Engineering Knowledge Management**: Index and search technical specifications, diagrams, and reports

## Training Methodology

This model was trained using the Document Screenshot Embedding (DSE) approach, which treats document screenshots as a unified input format. This eliminates the need for content extraction preprocessing while preserving all visual and textual information in documents.
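As an illustration of how the single-pair example above scales to retrieval over a corpus, a hedged sketch follows — `embed_document` and `embed_query` are hypothetical helpers standing in for the embedding steps shown in the usage example, each returning an L2-normalized `(1, d)` tensor.

```python
# Hypothetical sketch: rank N document screenshots against one query.
# `embed_document` / `embed_query` wrap the embedding code shown above.
import torch

doc_embeddings = torch.cat([embed_document(img) for img in document_images])  # (N, d)
query_embedding = embed_query("What are the specifications of this component?")  # (1, d)

# With unit-norm embeddings, the dot product equals cosine similarity.
scores = (query_embedding @ doc_embeddings.T).squeeze(0)
for idx in torch.argsort(scores, descending=True).tolist():
    print(f"doc {idx}: {scores[idx]:.4f}")
```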
## Citation

```
@misc{flantier-smolvlm-dse,
  author = {racine.ai},
  title = {Flantier-SmolVLM-2B-dse: A Lightweight Document Screenshot Embedding Model},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/racineai/Flantier-SmolVLM-2B-dse}
}
```

## License

This model is released under the Apache 2.0 license.