modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
sequence
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e1_v2
ahmedelgebaly
2025-05-26T08:51:09Z
19
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:meta-llama/Llama-3.1-8B", "base_model:adapter:meta-llama/Llama-3.1-8B", "license:llama3", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-26T17:42:50Z
--- library_name: peft license: llama3 base_model: meta-llama/Meta-Llama-3.1-8B tags: - axolotl - generated_from_trainer model-index: - name: llama-3.1-8b-squadv2_SciQ_e1_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: meta-llama/Meta-Llama-3.1-8B # same model you originally used peft_model: ahmedelgebaly/llama-3.1-8b-squadv2 model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: true strict: false datasets: - path: ahmedelgebaly/SciQ_Alpaca type: alpaca split: train test_datasets: - path: ahmedelgebaly/SciQ_Alpaca type: alpaca split: validation dataset_prepared_path: output_dir: ./outputs/qlora-out adapter: qlora lora_model_dir: sequence_len: 2048 # Halved from 4096 to reduce memory usage sample_packing: true eval_sample_packing: false pad_to_sequence_len: true lora_r: 64 # Increased from 32 lora_alpha: 32 # Increased from 16 lora_dropout: 0.05 lora_target_modules: lora_target_linear: true lora_fan_in_fan_out: wandb_project: llama-3.1-8b-squadv2_SciQ_e1_v2 wandb_entity: wandb_watch: wandb_name: llama-3.1-8b-squadv2-v0_SciQ_e1_v2 wandb_log_model: hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e1_v2 gradient_accumulation_steps: 32 # Keeps effective batch size=64 (2x32) micro_batch_size: 2 # Decreased from 4 num_epochs: 1 optimizer: paged_adamw_32bit lr_scheduler: cosine_with_restarts # Updated learning_rate: 0.0001 # Reduced from 0.0002 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 # Increased from 10 evals_per_epoch: 4 eval_table_size: saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: "<|end_of_text|>" ``` </details><br> # llama-3.1-8b-squadv2_SciQ_e1_v2 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the ahmedelgebaly/SciQ_Alpaca dataset. It achieves the following results on the evaluation set: - Loss: 1.5100 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.8006 | 0.0598 | 1 | 1.8330 | | 1.7825 | 0.2393 | 4 | 1.8315 | | 1.7629 | 0.4785 | 8 | 1.8140 | | 1.6663 | 0.7178 | 12 | 1.7312 | | 1.5168 | 0.9570 | 16 | 1.5100 | ### Framework versions - PEFT 0.13.2 - Transformers 4.45.2 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
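The card's usage sections are empty ("More information needed"). As a gap-filler, here is a minimal, untested sketch of how a QLoRA adapter like this one is typically loaded for inference with `transformers` and `peft`; the Alpaca-style prompt is an assumption inferred from `type: alpaca` in the config, not something the card documents.

```python
# Minimal sketch: load the 4-bit base model and apply this QLoRA adapter.
# Assumes access to the gated meta-llama base weights named in the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B"
adapter_id = "ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e1_v2"

# 4-bit load mirrors the config's `load_in_4bit: true`
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Alpaca-style prompt (assumption based on `type: alpaca` in the config)
prompt = ("Below is an instruction that describes a task. "
          "Write a response that appropriately completes the request.\n\n"
          "### Instruction:\nWhat gas do plants absorb during photosynthesis?\n\n### Response:\n")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```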
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e3
ahmedelgebaly
2025-05-26T08:50:39Z
14
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:meta-llama/Llama-3.1-8B", "base_model:adapter:meta-llama/Llama-3.1-8B", "license:llama3", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-25T14:03:05Z
--- library_name: peft license: llama3 base_model: meta-llama/Meta-Llama-3.1-8B tags: - axolotl - generated_from_trainer model-index: - name: llama-3.1-8b-squadv2_SciQ_e3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: meta-llama/Meta-Llama-3.1-8B # same model you originally used # Load your previously fine-tuned model as a PEFT adapter peft_model: ahmedelgebaly/llama-3.1-8b-squadv2_e3 model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: true strict: false datasets: - path: ahmedelgebaly/SciQ_Alpaca type: alpaca split: train test_datasets: - path: ahmedelgebaly/SciQ_Alpaca type: alpaca split: validation dataset_prepared_path: output_dir: ./outputs/qlora-out adapter: qlora lora_model_dir: sequence_len: 4096 sample_packing: true pad_to_sequence_len: true lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_modules: lora_target_linear: true lora_fan_in_fan_out: wandb_project: llama-3.1-8b-squadv2_SciQ_e3 wandb_entity: wandb_watch: wandb_name: llama-3.1-8b-squadv2-v0_SciQ_e3 wandb_log_model: hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e3 gradient_accumulation_steps: 4 micro_batch_size: 4 num_epochs: 3 optimizer: paged_adamw_32bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 4 eval_table_size: saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: "<|end_of_text|>" ``` </details><br> # llama-3.1-8b-squadv2_SciQ_e3 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the ahmedelgebaly/SciQ_Alpaca dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8935 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.7866 | 0.0305 | 1 | 1.8420 | | 1.1314 | 0.2443 | 8 | 1.0979 | | 0.8408 | 0.4885 | 16 | 0.9646 | | 0.8669 | 0.7328 | 24 | 0.9339 | | 0.8588 | 0.9771 | 32 | 0.9197 | | 0.8363 | 1.2137 | 40 | 0.9090 | | 0.8021 | 1.4580 | 48 | 0.9028 | | 0.833 | 1.7023 | 56 | 0.8995 | | 0.8083 | 1.9466 | 64 | 0.8951 | | 0.8215 | 2.1832 | 72 | 0.8948 | | 0.824 | 2.4275 | 80 | 0.8945 | | 0.802 | 2.6718 | 88 | 0.8936 | | 0.7762 | 2.9160 | 96 | 0.8935 | ### Framework versions - PEFT 0.13.2 - Transformers 4.45.2 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
Mihaj/whisper-medium-karelian-cs-w-rus
Mihaj
2025-05-26T08:49:41Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-20T09:38:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BlinkDL/rwkv-8-pile
BlinkDL
2025-05-26T08:49:36Z
0
2
null
[ "dataset:EleutherAI/pile", "license:apache-2.0", "region:us" ]
null
2025-05-26T02:11:19Z
--- license: apache-2.0 datasets: - EleutherAI/pile --- RWKV-8 trained on the Pile w/ the "20b tokenizer" (332,115,325,534 tokens). These are early testing versions; I will iterate. How to run it: https://github.com/BlinkDL/RWKV-LM/tree/main/RWKV-v7
igzi/MNLP_document_encoder-finetuned
igzi
2025-05-26T08:48:30Z
0
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-26T08:48:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
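The card above is an unfilled template. As a hedged illustration only, a BERT checkpoint tagged `feature-extraction` can generally be used to produce sentence embeddings along these lines; mean pooling is an assumed choice, not something the card documents.

```python
# Minimal sketch: mean-pooled sentence embeddings from this BERT encoder.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "igzi/MNLP_document_encoder-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

texts = ["What is the boiling point of water?", "Documents about thermodynamics."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state      # (batch, seq_len, dim)
mask = batch["attention_mask"].unsqueeze(-1)       # exclude padding from the average
embeddings = (hidden * mask).sum(1) / mask.sum(1)  # mean pooling (assumed strategy)
print(embeddings.shape)
```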
phospho-app/jmota27-ACT-boat_cup_dataset-x65e4
phospho-app
2025-05-26T08:48:06Z
0
0
null
[ "safetensors", "phosphobot", "act", "region:us" ]
null
2025-05-26T06:27:35Z
--- tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful. Try it out on your robot! ## Training parameters: - **Dataset**: [jmota27/boat_cup_dataset](https://huggingface.co/datasets/jmota27/boat_cup_dataset) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 60 - **Training steps**: 8000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e1
ahmedelgebaly
2025-05-26T08:47:48Z
10
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:meta-llama/Llama-3.1-8B", "base_model:adapter:meta-llama/Llama-3.1-8B", "license:llama3", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-25T13:20:17Z
--- library_name: peft license: llama3 base_model: meta-llama/Meta-Llama-3.1-8B tags: - axolotl - generated_from_trainer model-index: - name: llama-3.1-8b-squadv2_SciQ_e1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: meta-llama/Meta-Llama-3.1-8B # same model you originally used # Load your previously fine-tuned model as a PEFT adapter peft_model: ahmedelgebaly/llama-3.1-8b-squadv2 model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: true strict: false datasets: - path: ahmedelgebaly/SciQ_Alpaca type: alpaca split: train test_datasets: - path: ahmedelgebaly/SciQ_Alpaca type: alpaca split: validation dataset_prepared_path: output_dir: ./outputs/qlora-out adapter: qlora lora_model_dir: sequence_len: 4096 sample_packing: true pad_to_sequence_len: true lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_modules: lora_target_linear: true lora_fan_in_fan_out: wandb_project: llama-3.1-8b-squadv2_SciQ_e1 wandb_entity: wandb_watch: wandb_name: llama-3.1-8b-squadv2-v0_SciQ_e1 wandb_log_model: hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e1 gradient_accumulation_steps: 4 micro_batch_size: 4 num_epochs: 1 optimizer: paged_adamw_32bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 4 eval_table_size: saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: "<|end_of_text|>" ``` </details><br> # llama-3.1-8b-squadv2_SciQ_e1 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the ahmedelgebaly/SciQ_Alpaca dataset. It achieves the following results on the evaluation set: - Loss: 0.9369 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.7866 | 0.0305 | 1 | 1.8420 | | 1.1313 | 0.2443 | 8 | 1.0968 | | 0.841 | 0.4885 | 16 | 0.9655 | | 0.8722 | 0.7328 | 24 | 0.9415 | | 0.8736 | 0.9771 | 32 | 0.9369 | ### Framework versions - PEFT 0.13.2 - Transformers 4.45.2 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
bigband/ProteanEreshkigal
bigband
2025-05-26T08:42:28Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "gemma", "google", "Bifröst", "Bifrost", "code", "text-generation", "conversational", "base_model:google/gemma-3-27b-it", "base_model:finetune:google/gemma-3-27b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-26T08:32:00Z
--- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license base_model: google/gemma-3-27b-it tags: - transformers - gemma3 - gemma - google - Bifröst - Bifrost - code --- ## Bifröst-27B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a834a8895fd6416e29576f/sAXfe0cQdULI_GEVxBstw.png) Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance. ### Model Details - **Model Name:** Bifröst-27B - **Base Architecture:** gemma3 - **Application:** Enterprise Secure Code Generation - **Release Date:** 16-March-2025 ### Intended Use Bifröst is designed explicitly for: - Generating secure, efficient, and high-quality code. - Supporting development tasks within regulated enterprise environments. - Enhancing productivity by automating routine coding tasks without compromising security. ### Features - **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards. - **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions. - **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2). ### Limitations - Bifröst should be used under human supervision to ensure code correctness and security compliance. - Model-generated code should undergo appropriate security and quality assurance checks before deployment. ### Ethical Considerations - Users are encouraged to perform regular audits and compliance checks on generated outputs. - Enterprises should implement responsible AI practices to mitigate biases or unintended consequences. ### Usage Below are some quick-start instructions for using the model with the `transformers` library. #### Installation ```sh $ pip install git+https://github.com/huggingface/[email protected] ``` #### Running with the `pipeline` API ```python from transformers import pipeline import torch pipe = pipeline( "text-generation", model="OpenGenerativeAI/Bifrost-27B", device="cuda", torch_dtype=torch.bfloat16 ) messages = [{"role": "user", "content": "Generate a secure API key management system."}] # Pass the messages positionally; a `text=` keyword raises a TypeError output = pipe(messages, max_new_tokens=200) print(output[0]["generated_text"]) ``` ## Terms of Use This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
serag-ai/Finetuned_DDI_Gemma
serag-ai
2025-05-26T08:41:39Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "endpoints_compatible", "region:us" ]
null
2025-05-16T14:10:35Z
--- library_name: transformers tags: - unsloth ---
Lonz1no/Qwen3_Rude_RAG
Lonz1no
2025-05-26T06:27:28Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-26T06:27:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cleathley-dapth/bert-phishing-classifier-teacher
cleathley-dapth
2025-05-26T06:26:11Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-26T06:24:10Z
--- library_name: transformers license: apache-2.0 base_model: google-bert/bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-phishing-classifier-teacher results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-phishing-classifier-teacher This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2894 - Accuracy: 0.878 - Auc: 0.951 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:| | 0.5004 | 1.0 | 263 | 0.3824 | 0.811 | 0.909 | | 0.3798 | 2.0 | 526 | 0.3567 | 0.833 | 0.934 | | 0.3914 | 3.0 | 789 | 0.3284 | 0.838 | 0.943 | | 0.3755 | 4.0 | 1052 | 0.4358 | 0.809 | 0.941 | | 0.3415 | 5.0 | 1315 | 0.3250 | 0.864 | 0.945 | | 0.3378 | 6.0 | 1578 | 0.3317 | 0.864 | 0.946 | | 0.32 | 7.0 | 1841 | 0.2918 | 0.882 | 0.948 | | 0.3321 | 8.0 | 2104 | 0.2912 | 0.882 | 0.95 | | 0.3102 | 9.0 | 2367 | 0.2868 | 0.873 | 0.951 | | 0.3186 | 10.0 | 2630 | 0.2894 | 0.878 | 0.951 | ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.7.0+cu118 - Datasets 3.6.0 - Tokenizers 0.21.1
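The card gives no usage snippet. A minimal sketch of querying this text-classification checkpoint via the `pipeline` API follows; the label names in the output depend on the model's `id2label` mapping, which the card does not list.

```python
# Minimal sketch: run the fine-tuned phishing classifier with the pipeline API.
from transformers import pipeline

clf = pipeline("text-classification", model="cleathley-dapth/bert-phishing-classifier-teacher")
result = clf("Your account is locked! Click http://suspicious.example to verify your password.")
print(result)  # e.g. [{'label': 'LABEL_1', 'score': 0.97}], depending on the label mapping
```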
green19d25y/gpt2-23m-hf
green19d25y
2025-05-26T06:25:30Z
0
0
null
[ "safetensors", "gpt2", "text-generation", "en", "license:mit", "region:us" ]
text-generation
2025-05-26T06:09:58Z
--- license: mit language: - en pipeline_tag: text-generation --- # GPT2 HF Model (23M Parameters) This is a **GPT-2 architecture model** trained **completely from scratch** with **23 million parameters**. It uses a custom tokenizer and vocabulary, and is designed for experimentation with compact, task-specific language models. ## Training Details - **Architecture**: GPT-2 - **Parameters**: 23M - **Training from scratch**: Yes - **Pretrained base**: None - **Tokenizer**: ByteLevelBPETokenizer - **Vocabulary size**: 5K tokens - **Language**: English only - **Dataset**: https://www.gutenberg.org/ebooks/100 ## Purpose I want to check if I can train a model with just a few vocabulary tokens, a small embedding size, and limited data. Right now, it doesn't perform as well as I expected, but I will release a much better-trained model soon. ## Intended Use - Small-scale research - Testing text generation on limited data - Fine-grained experimentation with custom language models - Educational purposes ## Limitations - Not general-purpose - Limited vocabulary and context length - Struggles outside its trained domain - English-only - Not production-ready ## Inference Example ```python from transformers import GPT2LMHeadModel, GPT2Tokenizer model = GPT2LMHeadModel.from_pretrained("green19d25y/gpt2-23m-hf") tokenizer = GPT2Tokenizer.from_pretrained("green19d25y/gpt2-23m-hf") prompt = "He had need mean better than his" input_ids = tokenizer.encode(prompt, return_tensors="pt") output = model.generate( input_ids, max_length=50, num_return_sequences=1, do_sample=True, temperature=0.7 ) generated_text = tokenizer.decode(output[0], skip_special_tokens=True) print(generated_text) ```
rendoo/08_rendoo_06_6574
rendoo
2025-05-26T06:23:46Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "gemma", "google", "Bifröst", "Bifrost", "code", "text-generation", "conversational", "base_model:google/gemma-3-27b-it", "base_model:finetune:google/gemma-3-27b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-26T06:14:16Z
--- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license base_model: google/gemma-3-27b-it tags: - transformers - gemma3 - gemma - google - Bifröst - Bifrost - code --- ## Bifröst-27B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a834a8895fd6416e29576f/sAXfe0cQdULI_GEVxBstw.png) Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance. ### Model Details - **Model Name:** Bifröst-27B - **Base Architecture:** gemma3 - **Application:** Enterprise Secure Code Generation - **Release Date:** 16-March-2025 ### Intended Use Bifröst is designed explicitly for: - Generating secure, efficient, and high-quality code. - Supporting development tasks within regulated enterprise environments. - Enhancing productivity by automating routine coding tasks without compromising security. ### Features - **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards. - **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions. - **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2). ### Limitations - Bifröst should be used under human supervision to ensure code correctness and security compliance. - Model-generated code should undergo appropriate security and quality assurance checks before deployment. ### Ethical Considerations - Users are encouraged to perform regular audits and compliance checks on generated outputs. - Enterprises should implement responsible AI practices to mitigate biases or unintended consequences. ### Usage Below are some quick-start instructions for using the model with the `transformers` library. #### Installation ```sh $ pip install git+https://github.com/huggingface/[email protected] ``` #### Running with the `pipeline` API ```python from transformers import pipeline import torch pipe = pipeline( "text-generation", model="OpenGenerativeAI/Bifrost-27B", device="cuda", torch_dtype=torch.bfloat16 ) messages = [{"role": "user", "content": "Generate a secure API key management system."}] # Pass the messages positionally; a `text=` keyword raises a TypeError output = pipe(messages, max_new_tokens=200) print(output[0]["generated_text"]) ``` ## Terms of Use This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
soob3123/GrayLine-Qwen3-14B-Planner
soob3123
2025-05-26T06:23:31Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "feature-extraction", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-26T06:23:10Z
--- base_model: unsloth/qwen3-14b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** soob3123 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Yuichi1218/Lafeak-llama3-chatvector-05261128
Yuichi1218
2025-05-26T06:23:18Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-26T06:15:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
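The card is an unfilled template, but the repo tags ("text-generation", "conversational", "4-bit", "bitsandbytes") suggest a chat-tuned Llama-3 variant. A minimal, untested sketch under that assumption:

```python
# Minimal sketch: chat-style generation via the tokenizer's chat template.
# Assumption: the checkpoint ships a chat template, per its "conversational" tag.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Yuichi1218/Lafeak-llama3-chatvector-05261128"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```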
D-Cajiao/Text-2-SQL
D-Cajiao
2025-05-26T06:16:28Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:support-pvelocity/Code-Llama-2-7B-instruct-text2sql", "base_model:adapter:support-pvelocity/Code-Llama-2-7B-instruct-text2sql", "region:us" ]
null
2025-05-26T06:13:02Z
--- base_model: support-pvelocity/Code-Llama-2-7B-instruct-text2sql library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
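The card's "How to Get Started" section is empty; below is a minimal, untested sketch of loading this adapter onto the base model named in its front matter with `peft`. The prompt format is an assumption (the card does not document one).

```python
# Minimal sketch: apply the Text-2-SQL PEFT adapter to its stated base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "support-pvelocity/Code-Llama-2-7B-instruct-text2sql"
adapter_id = "D-Cajiao/Text-2-SQL"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Plain-text question as prompt (assumed format; not documented in the card)
question = "List the names of customers who placed more than five orders."
inputs = tokenizer(question, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```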
cleathley-dapth/bert-phishing-classifier_teacher
cleathley-dapth
2025-05-26T06:14:34Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-26T04:59:32Z
--- library_name: transformers license: apache-2.0 base_model: google-bert/bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-phishing-classifier_teacher results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-phishing-classifier_teacher This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2894 - Accuracy: 0.878 - Auc: 0.951 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:| | 0.5027 | 1.0 | 263 | 0.3880 | 0.804 | 0.909 | | 0.3762 | 2.0 | 526 | 0.3613 | 0.836 | 0.933 | | 0.3932 | 3.0 | 789 | 0.3247 | 0.842 | 0.942 | | 0.3791 | 4.0 | 1052 | 0.4613 | 0.804 | 0.941 | | 0.3409 | 5.0 | 1315 | 0.3251 | 0.864 | 0.944 | | 0.3368 | 6.0 | 1578 | 0.3309 | 0.869 | 0.946 | | 0.3197 | 7.0 | 1841 | 0.2927 | 0.876 | 0.948 | | 0.3329 | 8.0 | 2104 | 0.2908 | 0.882 | 0.949 | | 0.3101 | 9.0 | 2367 | 0.2864 | 0.873 | 0.95 | | 0.3195 | 10.0 | 2630 | 0.2894 | 0.878 | 0.951 | ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.7.0+cu118 - Datasets 3.6.0 - Tokenizers 0.21.1
TheGardener/Llama-0.7B-shortened-llama
TheGardener
2025-05-26T06:13:30Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-26T06:11:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Rosh03/Reasoning_finetuned_llama
Rosh03
2025-05-26T06:12:37Z
0
0
peft
[ "peft", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:reasonir/reasonir-data", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-3B-Instruct", "license:mit", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-25T22:32:39Z
--- base_model: - meta-llama/Llama-3.2-3B-Instruct library_name: peft license: mit datasets: - reasonir/reasonir-data language: - en pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Rosh Jaison - **Funded by [optional]:** None - **Shared by [optional]:** None - **Model type:** Text-Generation - **Language(s) (NLP):** English - **License:** MIT - **Finetuned from model [optional]:** meta-llama/Llama-3.2-3B-Instruct ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
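The "How to Get Started" section above is still empty; since the tags point at a 4-bit PEFT adapter over meta-llama/Llama-3.2-3B-Instruct, a minimal loading sketch might look like the following (standard `peft`/`transformers` calls; the chat prompt and generation settings are illustrative assumptions, not this repo's documented usage, and `bitsandbytes` may need to be installed for the 4-bit weights):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the adapter together with its base model (resolved from the adapter config).
model = AutoPeftModelForCausalLM.from_pretrained(
    "Rosh03/Reasoning_finetuned_llama",
    device_map="auto",
)
# Tokenizer of the base model named in the tags (an assumption).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")

messages = [{"role": "user", "content": "Explain step by step why 17 is prime."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```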
Centk/task-9-google-gemma-2b
Centk
2025-05-26T06:11:31Z
655
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
2025-05-10T09:19:20Z
--- base_model: google/gemma-2b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
liudeli/test_grpo
liudeli
2025-05-26T06:10:26Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-26T06:06:58Z
--- base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** liudeli - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ibuki95/model3
ibuki95
2025-05-26T06:10:11Z
0
0
null
[ "region:us" ]
null
2025-05-26T06:06:05Z
# Container Template for SoundsRight Subnet Miners This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively. This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but this is not guaranteed. To run the container, first configure the NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt. Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html). Verify that the CDI specification was generated correctly with: ``` $ nvidia-ctk cdi list ``` You should see this in your output: ``` nvidia.com/gpu=all nvidia.com/gpu=0 ``` If you are running podman as root, build and run the container with: ``` podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi ``` Access logs with: ``` podman logs -f modelapi ``` If you are running the container rootless, there are a few more changes to make: First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters: ``` [nvidia-container-cli] no-cgroups = true [nvidia-container-runtime] debug = "/tmp/nvidia-container-runtime.log" ``` You can also run the following command to achieve the same result: ``` $ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place ``` Then build and run the container with: ``` podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi ``` Access logs with: ``` podman logs -f modelapi ``` Running the container will spin up an API with the following endpoints: 1. `/status/` : Communicates API status 2. `/prepare/` : Download model checkpoint and initialize model 3. `/upload-audio/` : Upload audio files, save to noisy audio directory 4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory 5. `/download-enhanced/` : Download enhanced audio files By default the API will use host `0.0.0.0` and port `6500`. ### References 1. **Welker, Simon; Richter, Julius; Gerkmann, Timo** *Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*. Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932. [DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653) 2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo** *Speech Enhancement and Dereverberation with Diffusion-based Generative Models*. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364. [DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241) 3.
**Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo** *EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*. Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
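Once the container is up, a quick way to sanity-check the API described above is to poll `/status/` on the default host and port; the sketch below assumes only what the endpoint list states (the exact shape of the response body is not documented here):

```python
import requests

BASE_URL = "http://0.0.0.0:6500"  # default host/port from this README

# Check that the miner API is up before preparing the model or uploading audio.
resp = requests.get(f"{BASE_URL}/status/", timeout=10)
resp.raise_for_status()
print(resp.status_code, resp.text)
```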
Clybius/Chroma-fp8-scaled
Clybius
2025-05-26T06:08:41Z
0
46
pytorch
[ "pytorch", "text-to-image", "base_model:lodestones/Chroma", "base_model:finetune:lodestones/Chroma", "license:apache-2.0", "region:us" ]
text-to-image
2025-03-20T01:01:06Z
--- license: apache-2.0 base_model: - lodestones/Chroma pipeline_tag: text-to-image library_name: pytorch --- # Chroma FP8 Scaled ## Model Details - **Model Type**: Scaled FP8 safetensors variant of Lodestone Rock's [Chroma](https://huggingface.co/lodestones/Chroma) model - **Model Architecture**: Chroma architecture, with FP8 scaling ## Model Description Chroma FP8 Scaled is a high-precision variant of the Chroma model, utilizing the full dynamic range of FP8 (-448 to 448). This model leverages the large headroom available in FP8 format to maintain higher precision compared to standard FP8 safetensors, resulting in improved performance while maintaining the benefits of reduced model size. ## Hardware and Software Requirements - **Dependencies**: Requires an up-to-date ComfyUI as of May 1, 2025. ## Installation and Usage ``` # Load the model using `Load Diffusion Model` in ComfyUI # Set weight_dtype to `default` ``` ## Acknowledgments Thanks to Lodestone Rock for creating the original Chroma model and developing the FluxMod toolkit that enables this optimized FP8 representation.
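As an illustration of what "scaled FP8" buys (a conceptual sketch, not the FluxMod toolkit's actual code): each tensor is stored in `float8_e4m3fn` alongside a scale chosen so its largest magnitude lands near the format's ±448 limit, so small-magnitude weights keep more of the format's precision than an unscaled cast would give them.

```python
import torch

def quantize_fp8_scaled(w: torch.Tensor):
    # Map the tensor's max magnitude to FP8 E4M3's max normal value (448.0),
    # using the format's full dynamic range instead of a naive cast.
    fp8_max = torch.finfo(torch.float8_e4m3fn).max  # 448.0
    scale = w.abs().max().clamp(min=1e-12) / fp8_max
    return (w / scale).to(torch.float8_e4m3fn), scale

def dequantize_fp8_scaled(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Reconstruct an approximation of the original tensor.
    return w_fp8.to(torch.float32) * scale

w = torch.randn(1024, 1024)
w_fp8, scale = quantize_fp8_scaled(w)
err = (w - dequantize_fp8_scaled(w_fp8, scale)).abs().max()
print(f"max reconstruction error: {err.item():.5f}")
```

This requires PyTorch ≥ 2.1 for the `float8_e4m3fn` dtype.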
ibuki95/model2
ibuki95
2025-05-26T06:08:35Z
0
0
null
[ "region:us" ]
null
2025-05-26T06:04:40Z
# Container Template for SoundsRight Subnet Miners This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively. This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but this is not guaranteed. To run the container, first configure the NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt. Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html). Verify that the CDI specification was generated correctly with: ``` $ nvidia-ctk cdi list ``` You should see this in your output: ``` nvidia.com/gpu=all nvidia.com/gpu=0 ``` If you are running podman as root, build and run the container with: ``` podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi ``` Access logs with: ``` podman logs -f modelapi ``` If you are running the container rootless, there are a few more changes to make: First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters: ``` [nvidia-container-cli] no-cgroups = true [nvidia-container-runtime] debug = "/tmp/nvidia-container-runtime.log" ``` You can also run the following command to achieve the same result: ``` $ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place ``` Then build and run the container with: ``` podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi ``` Access logs with: ``` podman logs -f modelapi ``` Running the container will spin up an API with the following endpoints: 1. `/status/` : Communicates API status 2. `/prepare/` : Download model checkpoint and initialize model 3. `/upload-audio/` : Upload audio files, save to noisy audio directory 4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory 5. `/download-enhanced/` : Download enhanced audio files By default the API will use host `0.0.0.0` and port `6500`. ### References 1. **Welker, Simon; Richter, Julius; Gerkmann, Timo** *Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*. Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932. [DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653) 2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo** *Speech Enhancement and Dereverberation with Diffusion-based Generative Models*. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364. [DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241) 3.
**Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo** *EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*. Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
Amoros/Amoros_Beaugosse_test-large-2025_05_26_36270-bs64_freeze
Amoros
2025-05-26T06:04:52Z
0
0
null
[ "tensorboard", "hf-summary-writer", "region:us" ]
null
2025-05-26T06:04:49Z
--- tags: - hf-summary-writer ---
surikaaaa/ffff
surikaaaa
2025-05-26T06:03:31Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-26T06:03:31Z
--- license: creativeml-openrail-m ---
mohamedbilal1496/nanoVLM_ocr_test
mohamedbilal1496
2025-05-26T06:03:27Z
0
0
nanovlm
[ "nanovlm", "safetensors", "vision-language", "multimodal", "research", "image-text-to-text", "license:mit", "region:us" ]
image-text-to-text
2025-05-26T04:25:00Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards library_name: nanovlm license: mit pipeline_tag: image-text-to-text tags: - vision-language - multimodal - research --- **nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built in pure PyTorch, the entire model architecture and training logic fit within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M-parameter model. For more information, check out the base model at https://huggingface.co/lusxvr/nanoVLM-222M. **Usage:** Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM. Follow the install instructions and run the following code: ```python from models.vision_language_model import VisionLanguageModel model = VisionLanguageModel.from_pretrained("mohamedbilal1496/nanoVLM_ocr_test") ```
Wuhall/xlm-roberta-base-cls
Wuhall
2025-05-26T06:03:10Z
0
0
null
[ "safetensors", "xlm-roberta", "zh", "en", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "region:us" ]
null
2025-05-26T05:57:23Z
--- license: mit language: - zh - en base_model: - FacebookAI/xlm-roberta-base --- # xlm-roberta-base-cls A text classification model fine-tuned from [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base). Evaluation results (epoch 4): | Metric | Value | |---|---| | eval_loss | 0.0206 | | eval_accuracy | 0.9972 | | eval_runtime | 9.35 s | | eval_samples_per_second | 76.17 | | eval_steps_per_second | 4.81 |
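A minimal way to try the classifier (a sketch; the label set and `id2label` mapping are whatever the checkpoint's config defines, which this card does not document):

```python
from transformers import pipeline

# Standard text-classification pipeline over the fine-tuned XLM-R checkpoint.
clf = pipeline("text-classification", model="Wuhall/xlm-roberta-base-cls")
print(clf("这是一条测试消息。"))       # Chinese input
print(clf("This is a test message."))  # English input
```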
barandinho/aya-expanse-32b-turkish-reasoning-sft
barandinho
2025-05-26T05:57:55Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-26T05:57:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
root-jlee/Reinforce-pixelcopter-basic
root-jlee
2025-05-26T05:53:22Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2025-05-26T05:53:19Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pixelcopter-basic results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 17.80 +/- 14.90 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
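For readers following Unit 4, the heart of REINFORCE is a policy-gradient step on discounted returns; the sketch below shows that update in isolation (illustrative only, not this repo's exact training script):

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """One-episode REINFORCE loss from per-step action log-probs and rewards."""
    returns, g = [], 0.0
    for r in reversed(rewards):  # discounted return G_t, computed backwards
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    # Standardizing returns is a common variance-reduction trick.
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    # Gradient ascent on expected return = gradient descent on this loss.
    return -(torch.stack(log_probs) * returns).sum()
```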
vickt/gemma3-27b-it-zh2en
vickt
2025-05-26T05:52:00Z
31
0
null
[ "safetensors", "gemma3_text", "translation", "taiwan", "zh-en", "en-zh", "vllm", "bitsandbytes", "gemma3", "zh", "en", "license:apache-2.0", "region:us" ]
translation
2025-04-17T07:08:51Z
--- license: apache-2.0 language: - zh - en tags: - translation - taiwan - zh-en - en-zh - vllm - bitsandbytes - gemma3 --- # gemma3-27b-it-zh2en Translation Model (English ⇄ Traditional Chinese) The base model for both English→Traditional Chinese and Traditional Chinese→English translation is [OpenPipe/gemma-3-27b-it-text-only](https://huggingface.co/OpenPipe/gemma-3-27b-it-text-only), which is built on gemma-3-27b-it and trained on its text component only, so it does not support image input. In addition, training deliberately avoided Mainland Chinese terminology, so the translated output consistently favors Taiwanese usage. ## Launching with vLLM on a 24 GB GPU ```sh export VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 python -m vllm.entrypoints.openai.api_server \ --model vickt/gemma3-27b-it-zh2en \ --gpu-memory-utilization 1 \ --max-model-len 9000 \ --port 8053 \ --enforce-eager \ --quantization bitsandbytes \ --load-format bitsandbytes \ --enable_prefix_caching ``` ## API Call ```python import requests MODEL = "vickt/gemma3-27b-it-zh2en" API_URL = "YOUR_URL"  # your vLLM server endpoint HEADERS = {"Content-Type": "application/json"} input="""英文對繁體中文翻譯以及中文對英文翻譯""" res_args = { "model": MODEL, "messages": [ # Translate to English (prompt: "Translate the following text verbatim into English, keep the original format and line breaks, return only the translation, use Western calendar years.") {"role":"system","content":f"""請逐字翻譯以下文本為英文,保留原始格式與換行,直接回傳翻譯結果,無需任何補充說明,年份請使用西元年。"""}, # Translate to Chinese (prompt: "Translate the following text verbatim into Chinese, keep the original format and line breaks, return only the translation.") # {"role":"system","content":f"""請逐字翻譯以下文本為中文,保留原始格式與換行,直接回傳翻譯結果,無需任何補充說明。"""}, {"role": "user", "content": f"{input}"} ], "temperature": 0, } response = requests.post( API_URL, json=res_args, headers=HEADERS, ).json()['choices'][0]['message']['content'] print(response) ``` ## Scores Translation quality was evaluated with ROUGE on a self-collected dataset of **Taiwanese news** Traditional Chinese–English pairs, covering both Chinese→English and English→Chinese tasks. | Model | Quantization | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | |---|---|---|---|---|---| | vickt/gemma3-27b-it-zh2en | INT4 | **0.5449** | **0.3115** | **0.4472** | **0.4678** | | [OpenPipe/gemma-3-27b-it-text-only](https://huggingface.co/OpenPipe/gemma-3-27b-it-text-only) | INT4 | 0.5057 | 0.2735 | 0.4110 | 0.4364 | | [microsoft/phi-4](https://huggingface.co/microsoft/phi-4) | INT4 | 0.4674 | 0.2357 | 0.3769 | 0.3992 | | [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | none noted | 0.4999 | 0.2568 | 0.4008 | 0.4243 | | [unsloth/Qwen2.5-32B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-32B-Instruct-bnb-4bit) | none noted | 0.5074 | 0.2683 | 0.4080 | 0.4297 | | [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) | none noted | 0.4150 | 0.2078 | 0.3113 | 0.3472 | | [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) | INT4 | 0.4347 | 0.2239 | 0.3299 | 0.3682 | | [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) | none noted | 0.4919 | 0.2531 | 0.3953 | 0.4222 | | [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it) | none noted | 0.2212 | 0.0866 | 0.1596 | 0.1761 | | [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) | INT4 | 0.3572 | 0.1724 | 0.2678 | 0.2912 | | [unsloth/gemma-3-27b-it-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-3-27b-it-unsloth-bnb-4bit) | none noted | 0.5056 | 0.2768 | 0.4119 | 0.4347 | | [unsloth/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503) | INT4 | 0.5031 | 0.2595 | 0.4074 | 0.4363 | | [unsloth/Mistral-Small-24B-Instruct-2501-unsloth-bnb-4bit](https://huggingface.co/unsloth/Mistral-Small-24B-Instruct-2501-unsloth-bnb-4bit) | none noted | 0.4864 | 0.2484 | 0.3898 | 0.4214 | > ✍️ If you use this model in your research, please cite: ``` @misc{gemma3zh2en, title={gemma3-27b-it-zh2en: A Gemma-based Translation Model for English-Traditional Chinese}, author={Vickt}, howpublished={https://huggingface.co/vickt/gemma3-27b-it-zh2en}, year={2025} } ```
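The ROUGE columns above match what the `evaluate` library reports out of the box, so a scoring harness over your own predictions and references might look like this (illustrative; the Taiwanese-news test set itself is not published here, and Chinese-side scoring would additionally require word segmentation before ROUGE is meaningful):

```python
import evaluate

# ROUGE as packaged by Hugging Face's `evaluate` library.
rouge = evaluate.load("rouge")
predictions = ["The ministry announced the new policy on Tuesday."]
references = ["The ministry announced its new policy on Tuesday."]
scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # dict with rouge1, rouge2, rougeL, rougeLsum
```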
rendoo/05_rendoo_05_159
rendoo
2025-05-26T05:51:01Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "gemma", "google", "Bifröst", "Bifrost", "code", "text-generation", "conversational", "base_model:google/gemma-3-27b-it", "base_model:finetune:google/gemma-3-27b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-26T05:41:39Z
--- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license base_model: google/gemma-3-27b-it tags: - transformers - gemma3 - gemma - google - Bifröst - Bifrost - code --- ## Bifröst-27B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a834a8895fd6416e29576f/sAXfe0cQdULI_GEVxBstw.png) Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance. ### Model Details - **Model Name:** Bifröst-27B - **Base Architecture:** gemma3 - **Application:** Enterprise Secure Code Generation - **Release Date:** 16-March-2025 ### Intended Use Bifröst is designed explicitly for: - Generating secure, efficient, and high-quality code. - Supporting development tasks within regulated enterprise environments. - Enhancing productivity by automating routine coding tasks without compromising security. ### Features - **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards. - **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions. - **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2). ### Limitations - Bifröst should be used under human supervision to ensure code correctness and security compliance. - Model-generated code should undergo appropriate security and quality assurance checks before deployment. ### Ethical Considerations - Users are encouraged to perform regular audits and compliance checks on generated outputs. - Enterprises should implement responsible AI practices to mitigate biases or unintended consequences. ### Usage Below are some quick-start instructions for using the model with the `transformers` library. #### Installation ```sh $ pip install git+https://github.com/huggingface/[email protected] ``` #### Running with the `pipeline` API ```python from transformers import pipeline import torch pipe = pipeline( "text-generation", model="OpenGenerativeAI/Bifrost-27B", device="cuda", torch_dtype=torch.bfloat16 ) # Chat-style input: the pipeline applies the model's chat template automatically. messages = [{"role": "user", "content": "Generate a secure API key management system."}] output = pipe(messages, max_new_tokens=200) print(output[0]["generated_text"]) ``` ## Terms of Use This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
titan5213/Llama-3.2-1B-IA3-Merged
titan5213
2025-05-26T05:49:31Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-23T14:34:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nyrasea/mongolia
nyrasea
2025-05-26T05:41:25Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-26T05:03:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
KIKU315/my-new-shiny-tokenizer
KIKU315
2025-05-26T05:40:09Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-26T05:40:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dhruvsangani/Sentiment_Analysis_of_Banking_Dataset
dhruvsangani
2025-05-26T05:30:51Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-24T15:06:02Z
--- base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** dhruvsangani - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
unsloth/DeepSeek-Prover-V2-671B-BF16
unsloth
2025-05-26T05:30:32Z
26
1
transformers
[ "transformers", "safetensors", "deepseek_v3", "text-generation", "deepseek", "unsloth", "conversational", "custom_code", "en", "base_model:deepseek-ai/DeepSeek-Prover-V2-671B", "base_model:quantized:deepseek-ai/DeepSeek-Prover-V2-671B", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "fp8", "region:us" ]
text-generation
2025-04-30T15:11:29Z
--- base_model: deepseek-ai/DeepSeek-Prover-V2-671B language: - en library_name: transformers tags: - deepseek - unsloth - transformers license: mit --- <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-CODE" style="margin: 2px;"> <img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-MODEL" style="margin: 2px;"> <img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> ## 1. Introduction We introduce DeepSeek-Prover-V2, an open-source large language model designed for formal theorem proving in Lean 4, with initialization data collected through a recursive theorem proving pipeline powered by DeepSeek-V3. The cold-start training procedure begins by prompting DeepSeek-V3 to decompose complex problems into a series of subgoals. The proofs of resolved subgoals are synthesized into a chain-of-thought process, combined with DeepSeek-V3's step-by-step reasoning, to create an initial cold start for reinforcement learning. This process enables us to integrate both informal and formal mathematical reasoning into a unified model. <p align="center"> <img width="100%" src="https://github.com/deepseek-ai/DeepSeek-Prover-V2/blob/main/figures/performance.png?raw=true"> </p> ## 2. 
## 2. Model Summary

---

**Synthesize Cold-Start Reasoning Data through Recursive Proof Search**

- To construct the cold-start dataset, we develop a simple yet effective pipeline for recursive theorem proving, utilizing DeepSeek-V3 as a unified tool for both subgoal decomposition and formalization. We prompt DeepSeek-V3 to decompose theorems into high-level proof sketches while simultaneously formalizing these proof steps in Lean 4, resulting in a sequence of subgoals.

- We use a smaller 7B model to handle the proof search for each subgoal, thereby reducing the associated computational burden. Once the decomposed steps of a challenging problem are resolved, we pair the complete step-by-step formal proof with the corresponding chain-of-thought from DeepSeek-V3 to create cold-start reasoning data.

---

**Reinforcement Learning with Synthetic Cold-Start Data**

- We curate a subset of challenging problems that remain unsolved by the 7B prover model in an end-to-end manner, but for which all decomposed subgoals have been successfully resolved. By composing the proofs of all subgoals, we construct a complete formal proof for the original problem. This proof is then appended to DeepSeek-V3's chain-of-thought, which outlines the corresponding lemma decomposition, thereby producing a cohesive synthesis of informal reasoning and subsequent formalization.

- After fine-tuning the prover model on the synthetic cold-start data, we perform a reinforcement learning stage to further enhance its ability to bridge informal reasoning with formal proof construction. Following the standard training objective for reasoning models, we use binary correct-or-incorrect feedback as the primary form of reward supervision.

- The resulting model, DeepSeek-Prover-V2-671B, achieves state-of-the-art performance in neural theorem proving, reaching an 88.9% pass rate on the MiniF2F-test and solving 49 out of 658 problems from PutnamBench. The proofs generated by DeepSeek-Prover-V2 for the miniF2F dataset are available for download as a [ZIP archive](https://github.com/deepseek-ai/DeepSeek-Prover-V2/blob/master/minif2f-solutions.zip).

---

## 3. ProverBench: Formalization of AIME and Textbook Problems

We introduce ProverBench, a benchmark dataset comprising 325 problems. Of these, 15 are formalized from number theory and algebra questions featured in the recent AIME competitions (AIME 24 and 25), offering authentic high-school competition-level challenges. The remaining 310 problems are drawn from curated textbook examples and educational tutorials, contributing a diverse and pedagogically grounded collection of formalized mathematical problems. This benchmark is designed to enable more comprehensive evaluation across both high-school competition problems and undergraduate-level mathematics.

<div align="center">

| Area | Count |
| :---------------------: | :-------: |
| AIME 24&25 | 15 |
| Number Theory | 40 |
| Elementary Algebra | 30 |
| Linear Algebra | 50 |
| Abstract Algebra | 40 |
| Calculus | 90 |
| Real Analysis | 30 |
| Complex Analysis | 10 |
| Functional Analysis | 10 |
| Probability | 10 |
| Total | 325 |

</div>

## 4. Model & Dataset Downloads

We release DeepSeek-Prover-V2 in two model sizes: 7B and 671B parameters. DeepSeek-Prover-V2-671B is trained on top of DeepSeek-V3-Base. DeepSeek-Prover-V2-7B is built upon DeepSeek-Prover-V1.5-Base and features an extended context length of up to 32K tokens.
<div align="center">

| **Model** | **Download** |
| :-----------------------------: | :----------------------------------------------------------: |
| DeepSeek-Prover-V2-7B | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-7B) |
| DeepSeek-Prover-V2-671B | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B) |

</div>

<div align="center">

| **Dataset** | **Download** |
| :-----------------------------: | :----------------------------------------------------------: |
| DeepSeek-ProverBench | [🤗 HuggingFace](https://huggingface.co/datasets/deepseek-ai/DeepSeek-ProverBench) |

</div>

## 5. Quick Start

You can use [Hugging Face's Transformers](https://github.com/huggingface/transformers) directly for model inference. DeepSeek-Prover-V2-671B shares the same architecture as DeepSeek-V3. For detailed information and supported features, please refer to [the DeepSeek-V3 documentation on Hugging Face](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/deepseek_v3.md).

The following is a basic example of generating a proof for a problem from the miniF2F dataset:

````python
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.manual_seed(30)

model_id = "deepseek-ai/DeepSeek-Prover-V2-7B"  # or "deepseek-ai/DeepSeek-Prover-V2-671B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

formal_statement = """
import Mathlib
import Aesop

set_option maxHeartbeats 0

open BigOperators Real Nat Topology Rat

/-- What is the positive difference between $120\%$ of 30 and $130\%$ of 20? Show that it is 10.-/
theorem mathd_algebra_10 : abs ((120 : ℝ) / 100 * 30 - 130 / 100 * 20) = 10 := by
  sorry
""".strip()

prompt = """
Complete the following Lean 4 code:

```lean4
{}
```

Before producing the Lean 4 code to formally prove the given theorem, provide a detailed proof plan outlining the main proof steps and strategies.
The plan should highlight key ideas, intermediate lemmas, and proof structures that will guide the construction of the final formal proof.
""".strip()

chat = [
    {"role": "user", "content": prompt.format(formal_statement)},
]

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
inputs = tokenizer.apply_chat_template(chat, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)

start = time.time()
outputs = model.generate(inputs, max_new_tokens=8192)
print(tokenizer.batch_decode(outputs))
print(time.time() - start)
````

## 6. License

The use of DeepSeek-Prover-V2 models is subject to [the Model License](LICENSE-MODEL).

## 7. Contact

If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
DrViJ/ppo-Huggy
DrViJ
2025-05-26T05:28:26Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-05-25T21:05:32Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: DrViJ/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
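### Download the model from the Hub (sketch)

The card does not include a download step; as a sketch, the ML-Agents Hub integration can fetch this checkpoint before resuming training or watching it play (this assumes `ml-agents` with its Hugging Face integration is installed):

```bash
# Download DrViJ/ppo-Huggy into a local folder
mlagents-load-from-hf --repo-id="DrViJ/ppo-Huggy" --local-dir="./downloads/ppo-Huggy"
```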
Shiva4113/qwen2.5-7b-instruct-unsloth-bnb-4bit-qa-family-law-v1
Shiva4113
2025-05-26T05:27:31Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-26T05:27:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MAAT-EL-DUAT/JENNA-CHATML-9000
MAAT-EL-DUAT
2025-05-26T05:26:32Z
0
0
null
[ "region:us" ]
null
2025-05-25T22:06:35Z
### EXPERIMENTS IN EXTREME REVERSE POLICY ACTION

THIS IS STILL THEORY. IT HAS NOT BEEN DONE YET.
Fynd/cloth-vton
Fynd
2025-05-26T05:25:08Z
0
0
null
[ "region:us" ]
null
2025-05-26T05:24:32Z
--- title: Cloth Vton emoji: 📉 colorFrom: gray colorTo: green sdk: gradio sdk_version: 5.31.0 app_file: app.py pinned: false short_description: Cloth VTON --- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
abrarlohia/cloth-vton
abrarlohia
2025-05-26T05:23:53Z
0
0
null
[ "region:us" ]
null
2025-05-26T05:21:36Z
--- title: Cloth Vton emoji: 📉 colorFrom: gray colorTo: green sdk: gradio sdk_version: 5.31.0 app_file: app.py pinned: false short_description: Cloth VTON --- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
Ash2749/trial3.1_8b
Ash2749
2025-05-26T05:21:53Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-26T05:19:00Z
---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Ash2749
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
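The card ships without an inference example. A minimal sketch (not from the original card) of loading the fine-tune with plain `transformers`, assuming the repo hosts weights loadable by `AutoModelForCausalLM`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ash2749/trial3.1_8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 3.1 instruct checkpoints expect the chat template
messages = [{"role": "user", "content": "Summarize what LoRA fine-tuning does."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```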
TheGardener/Qwen-0.4B-shortened-llama
TheGardener
2025-05-26T05:20:56Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-26T05:19:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Lightricks/ltxv-spatial-upscaler-0.9.7
Lightricks
2025-05-26T05:18:52Z
1,445
1
diffusers
[ "diffusers", "safetensors", "ltx-video", "video-upscaling", "video-to-video", "en", "license:other", "diffusers:LTXLatentUpsamplePipeline", "region:us" ]
null
2025-05-14T18:09:52Z
---
tags:
- ltx-video
- video-upscaling
- diffusers
- video-to-video
pinned: false
language:
- en
license: other
pipeline_tag: video-to-video
library_name: diffusers
---

# LTX Video Spatial Upscaler 0.9.7 Model Card

This model card focuses on the LTX Video Spatial Upscaler 0.9.7, a component model designed to work in conjunction with the LTX-Video generation models. The main LTX-Video codebase is available [here](https://github.com/Lightricks/LTX-Video).

LTX-Video is the first DiT-based video generation model capable of generating high-quality videos in real-time. It produces 30 FPS videos at a 1216×704 resolution faster than they can be watched. Trained on a large-scale dataset of diverse videos, the model generates high-resolution videos with realistic and varied content. We provide a model for both text-to-video and image+text-to-video use cases.

**The LTX Video Spatial Upscaler** is a diffusion-based model that enhances the spatial resolution of videos. It is specifically trained to upscale the latent representations of videos generated by LTX Video models.

<img src="./media/trailer.gif" alt="trailer" width="512">

| | | | | |:---:|:---:|:---:|:---:| | ![example1](./media/ltx-video_example_00001.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman with long brown hair and light skin smiles at another woman...</summary>A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage.</details> | ![example2](./media/ltx-video_example_00002.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman walks away from a white Jeep parked on a city street at night...</summary>A woman walks away from a white Jeep parked on a city street at night, then ascends a staircase and knocks on a door. The woman, wearing a dark jacket and jeans, walks away from the Jeep parked on the left side of the street, her back to the camera; she walks at a steady pace, her arms swinging slightly by her sides; the street is dimly lit, with streetlights casting pools of light on the wet pavement; a man in a dark jacket and jeans walks past the Jeep in the opposite direction; the camera follows the woman from behind as she walks up a set of stairs towards a building with a green door; she reaches the top of the stairs and turns left, continuing to walk towards the building; she reaches the door and knocks on it with her right hand; the camera remains stationary, focused on the doorway; the scene is captured in real-life footage.</details> | ![example3](./media/ltx-video_example_00003.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman with blonde hair styled up, wearing a black dress...</summary>A woman with blonde hair styled up, wearing a black dress with sequins and pearl earrings, looks down with a sad expression on her face. The camera remains stationary, focused on the woman's face. The lighting is dim, casting soft shadows on her face.
The scene appears to be from a movie or TV show.</details> | ![example4](./media/ltx-video_example_00004.gif)<br><details style="max-width: 300px; margin: auto;"><summary>The camera pans over a snow-covered mountain range...</summary>The camera pans over a snow-covered mountain range, revealing a vast expanse of snow-capped peaks and valleys.The mountains are covered in a thick layer of snow, with some areas appearing almost white while others have a slightly darker, almost grayish hue. The peaks are jagged and irregular, with some rising sharply into the sky while others are more rounded. The valleys are deep and narrow, with steep slopes that are also covered in snow. The trees in the foreground are mostly bare, with only a few leaves remaining on their branches. The sky is overcast, with thick clouds obscuring the sun. The overall impression is one of peace and tranquility, with the snow-covered mountains standing as a testament to the power and beauty of nature.</details> | | ![example5](./media/ltx-video_example_00005.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman with light skin, wearing a blue jacket and a black hat...</summary>A woman with light skin, wearing a blue jacket and a black hat with a veil, looks down and to her right, then back up as she speaks; she has brown hair styled in an updo, light brown eyebrows, and is wearing a white collared shirt under her jacket; the camera remains stationary on her face as she speaks; the background is out of focus, but shows trees and people in period clothing; the scene is captured in real-life footage.</details> | ![example6](./media/ltx-video_example_00006.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A man in a dimly lit room talks on a vintage telephone...</summary>A man in a dimly lit room talks on a vintage telephone, hangs up, and looks down with a sad expression. He holds the black rotary phone to his right ear with his right hand, his left hand holding a rocks glass with amber liquid. He wears a brown suit jacket over a white shirt, and a gold ring on his left ring finger. His short hair is neatly combed, and he has light skin with visible wrinkles around his eyes. The camera remains stationary, focused on his face and upper body. The room is dark, lit only by a warm light source off-screen to the left, casting shadows on the wall behind him. The scene appears to be from a movie.</details> | ![example7](./media/ltx-video_example_00007.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A prison guard unlocks and opens a cell door...</summary>A prison guard unlocks and opens a cell door to reveal a young man sitting at a table with a woman. The guard, wearing a dark blue uniform with a badge on his left chest, unlocks the cell door with a key held in his right hand and pulls it open; he has short brown hair, light skin, and a neutral expression. The young man, wearing a black and white striped shirt, sits at a table covered with a white tablecloth, facing the woman; he has short brown hair, light skin, and a neutral expression. The woman, wearing a dark blue shirt, sits opposite the young man, her face turned towards him; she has short blonde hair and light skin. The camera remains stationary, capturing the scene from a medium distance, positioned slightly to the right of the guard. The room is dimly lit, with a single light fixture illuminating the table and the two figures. 
The walls are made of large, grey concrete blocks, and a metal door is visible in the background. The scene is captured in real-life footage.</details> | ![example8](./media/ltx-video_example_00008.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman with blood on her face and a white tank top...</summary>A woman with blood on her face and a white tank top looks down and to her right, then back up as she speaks. She has dark hair pulled back, light skin, and her face and chest are covered in blood. The camera angle is a close-up, focused on the woman's face and upper torso. The lighting is dim and blue-toned, creating a somber and intense atmosphere. The scene appears to be from a movie or TV show.</details> | | ![example9](./media/ltx-video_example_00009.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A man with graying hair, a beard, and a gray shirt...</summary>A man with graying hair, a beard, and a gray shirt looks down and to his right, then turns his head to the left. The camera angle is a close-up, focused on the man's face. The lighting is dim, with a greenish tint. The scene appears to be real-life footage. Step</details> | ![example10](./media/ltx-video_example_00010.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A clear, turquoise river flows through a rocky canyon...</summary>A clear, turquoise river flows through a rocky canyon, cascading over a small waterfall and forming a pool of water at the bottom.The river is the main focus of the scene, with its clear water reflecting the surrounding trees and rocks. The canyon walls are steep and rocky, with some vegetation growing on them. The trees are mostly pine trees, with their green needles contrasting with the brown and gray rocks. The overall tone of the scene is one of peace and tranquility.</details> | ![example11](./media/ltx-video_example_00011.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A man in a suit enters a room and speaks to two women...</summary>A man in a suit enters a room and speaks to two women sitting on a couch. The man, wearing a dark suit with a gold tie, enters the room from the left and walks towards the center of the frame. He has short gray hair, light skin, and a serious expression. He places his right hand on the back of a chair as he approaches the couch. Two women are seated on a light-colored couch in the background. The woman on the left wears a light blue sweater and has short blonde hair. The woman on the right wears a white sweater and has short blonde hair. The camera remains stationary, focusing on the man as he enters the room. The room is brightly lit, with warm tones reflecting off the walls and furniture. The scene appears to be from a film or television show.</details> | ![example12](./media/ltx-video_example_00012.gif)<br><details style="max-width: 300px; margin: auto;"><summary>The waves crash against the jagged rocks of the shoreline...</summary>The waves crash against the jagged rocks of the shoreline, sending spray high into the air.The rocks are a dark gray color, with sharp edges and deep crevices. The water is a clear blue-green, with white foam where the waves break against the rocks. 
The sky is a light gray, with a few white clouds dotting the horizon.</details> | | ![example13](./media/ltx-video_example_00013.gif)<br><details style="max-width: 300px; margin: auto;"><summary>The camera pans across a cityscape of tall buildings...</summary>The camera pans across a cityscape of tall buildings with a circular building in the center. The camera moves from left to right, showing the tops of the buildings and the circular building in the center. The buildings are various shades of gray and white, and the circular building has a green roof. The camera angle is high, looking down at the city. The lighting is bright, with the sun shining from the upper left, casting shadows from the buildings. The scene is computer-generated imagery.</details> | ![example14](./media/ltx-video_example_00014.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A man walks towards a window, looks out, and then turns around...</summary>A man walks towards a window, looks out, and then turns around. He has short, dark hair, dark skin, and is wearing a brown coat over a red and gray scarf. He walks from left to right towards a window, his gaze fixed on something outside. The camera follows him from behind at a medium distance. The room is brightly lit, with white walls and a large window covered by a white curtain. As he approaches the window, he turns his head slightly to the left, then back to the right. He then turns his entire body to the right, facing the window. The camera remains stationary as he stands in front of the window. The scene is captured in real-life footage.</details> | ![example15](./media/ltx-video_example_00015.gif)<br><details style="max-width: 300px; margin: auto;"><summary>Two police officers in dark blue uniforms and matching hats...</summary>Two police officers in dark blue uniforms and matching hats enter a dimly lit room through a doorway on the left side of the frame. The first officer, with short brown hair and a mustache, steps inside first, followed by his partner, who has a shaved head and a goatee. Both officers have serious expressions and maintain a steady pace as they move deeper into the room. The camera remains stationary, capturing them from a slightly low angle as they enter. The room has exposed brick walls and a corrugated metal ceiling, with a barred window visible in the background. The lighting is low-key, casting shadows on the officers' faces and emphasizing the grim atmosphere. The scene appears to be from a film or television show.</details> | ![example16](./media/ltx-video_example_00016.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman with short brown hair, wearing a maroon sleeveless top...</summary>A woman with short brown hair, wearing a maroon sleeveless top and a silver necklace, walks through a room while talking, then a woman with pink hair and a white shirt appears in the doorway and yells. The first woman walks from left to right, her expression serious; she has light skin and her eyebrows are slightly furrowed. The second woman stands in the doorway, her mouth open in a yell; she has light skin and her eyes are wide. The room is dimly lit, with a bookshelf visible in the background. The camera follows the first woman as she walks, then cuts to a close-up of the second woman's face. 
The scene is captured in real-life footage.</details> |

**This upscaler model is compatible with and can be used to improve the output quality of videos generated by both:**
* `Lightricks/LTX-Video-0.9.7-dev`
* `Lightricks/LTX-Video-0.9.7-distilled`

## Model Details
- **Developed by:** Lightricks
- **Model type:** Latent Diffusion Video Spatial Upscaler
- **Input:** Latent video frames from an LTX Video model.
- **Output:** Higher-resolution latent video frames.
- **Compatibility:** Can be used with `Lightricks/LTX-Video-0.9.7-dev` and `Lightricks/LTX-Video-0.9.7-distilled`.

## Usage

### Direct use
You can use the model for purposes under the license:
- 2B version 0.9: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.license.txt)
- 2B version 0.9.1 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.1.license.txt)
- 2B version 0.9.5 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.5.license.txt)
- 2B version 0.9.6-dev [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-2b-0.9.6-dev-04-25.license.txt)
- 2B version 0.9.6-distilled [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-2b-0.9.6-distilled-04-25.license.txt)
- 13B version 0.9.7-dev [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.license.txt)
- 13B version 0.9.7-dev-fp8 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev-fp8.license.txt)
- 13B version 0.9.7-distilled [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled.license.txt)
- 13B version 0.9.7-distilled-fp8 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled-fp8.license.txt)
- 13B version 0.9.7-distilled-lora128 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled-lora128.license.txt)
- Temporal upscaler version 0.9.7 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-temporal-upscaler-0.9.7.license.txt)
- Spatial upscaler version 0.9.7 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-spatial-upscaler-0.9.7.license.txt)

### General tips:
* The model works on resolutions that are divisible by 32 and on frame counts of the form 8n + 1 (e.g. 257). If the resolution or frame count does not satisfy these constraints, the input will be padded with -1 and then cropped to the desired resolution and number of frames.
* The model works best on resolutions under 720 x 1280 and frame counts below 257.
* Prompts should be in English. The more elaborate the better. A good prompt looks like `The turquoise waves crash against the dark, jagged rocks of the shore, sending white foam spraying into the air. The scene is dominated by the stark contrast between the bright blue water and the dark, almost black rocks. The water is a clear, turquoise color, and the waves are capped with white foam. The rocks are dark and jagged, and they are covered in patches of green moss. The shore is lined with lush green vegetation, including trees and bushes. In the background, there are rolling hills covered in dense forest.
The sky is cloudy, and the light is dim.`

### Online demo

The model is accessible right away via the following links:
- [LTX-Studio image-to-video](https://app.ltx.studio/ltx-video)
- [Fal.ai text-to-video](https://fal.ai/models/fal-ai/ltx-video)
- [Fal.ai image-to-video](https://fal.ai/models/fal-ai/ltx-video/image-to-video)
- [Replicate text-to-video and image-to-video](https://replicate.com/lightricks/ltx-video)

### ComfyUI

To use our model with ComfyUI, please follow the instructions at the dedicated [ComfyUI repo](https://github.com/Lightricks/ComfyUI-LTXVideo/).

### Run locally

#### Installation

The codebase was tested with Python 3.10.5, CUDA version 12.2, and supports PyTorch >= 2.1.2.

```bash
git clone https://github.com/Lightricks/LTX-Video.git
cd LTX-Video

# create env
python -m venv env
source env/bin/activate
python -m pip install -e .\[inference-script\]
```

#### Inference

To use our model, please follow the inference code in [inference.py](https://github.com/Lightricks/LTX-Video/blob/main/inference.py).

### Diffusers 🧨

LTX Video is compatible with the [Diffusers Python library](https://huggingface.co/docs/diffusers/main/en/index). It supports both text-to-video and image-to-video generation.

Make sure you install `diffusers` before trying out the examples below.

```bash
pip install -U git+https://github.com/huggingface/diffusers
```

The LTX Video Spatial Upscaler is used via the `LTXLatentUpsamplePipeline` in the `diffusers` library. It is intended to be part of a multi-stage generation process. Below is an example demonstrating how to use the spatial upsampler with a base LTX Video model (either the 'dev' or 'distilled' version).

```py
import torch
from diffusers import LTXConditionPipeline, LTXLatentUpsamplePipeline
from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
from diffusers.utils import export_to_video, load_video

# Choose your base LTX Video model:
# base_model_id = "Lightricks/LTX-Video-0.9.7-dev"
base_model_id = "Lightricks/LTX-Video-0.9.7-distilled"  # Using distilled for this example

# 0. Load base model and upsampler
pipe = LTXConditionPipeline.from_pretrained(base_model_id, torch_dtype=torch.bfloat16)
pipe_upsample = LTXLatentUpsamplePipeline.from_pretrained(
    "Lightricks/ltxv-spatial-upscaler-0.9.7",
    vae=pipe.vae,
    torch_dtype=torch.bfloat16
)
pipe.to("cuda")
pipe_upsample.to("cuda")

def round_to_nearest_resolution_acceptable_by_vae(height, width):
    # Snap height/width down to the VAE's *spatial* compression ratio (32 for LTX)
    height = height - (height % pipe.vae_spatial_compression_ratio)
    width = width - (width % pipe.vae_spatial_compression_ratio)
    return height, width

video = load_video(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cosmos/cosmos-video2world-input-vid.mp4"
)[:21]  # Use only the first 21 frames as conditioning
condition1 = LTXVideoCondition(video=video, frame_index=0)

prompt = "The video depicts a winding mountain road covered in snow, with a single vehicle traveling along it. The road is flanked by steep, rocky cliffs and sparse vegetation. The landscape is characterized by rugged terrain and a river visible in the distance. The scene captures the solitude and beauty of a winter drive through a mountainous region."
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
expected_height, expected_width = 768, 1152
downscale_factor = 2 / 3
num_frames = 161

# Part 1. Generate video at smaller resolution
downscaled_height, downscaled_width = int(expected_height * downscale_factor), int(expected_width * downscale_factor)
downscaled_height, downscaled_width = round_to_nearest_resolution_acceptable_by_vae(downscaled_height, downscaled_width)
latents = pipe(
    conditions=[condition1],
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=downscaled_width,
    height=downscaled_height,
    num_frames=num_frames,
    num_inference_steps=30,
    generator=torch.Generator().manual_seed(0),
    output_type="latent",
).frames

# Part 2. Upscale generated video using latent upsampler with fewer inference steps
# The available latent upsampler upscales the height/width by 2x
upscaled_height, upscaled_width = downscaled_height * 2, downscaled_width * 2
upscaled_latents = pipe_upsample(
    latents=latents,
    output_type="latent"
).frames

# Part 3. Denoise the upscaled video with few steps to improve texture (optional, but recommended)
video = pipe(
    conditions=[condition1],
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=upscaled_width,
    height=upscaled_height,
    num_frames=num_frames,
    denoise_strength=0.4,  # Effectively, 4 inference steps out of 10
    num_inference_steps=10,
    latents=upscaled_latents,
    decode_timestep=0.05,
    image_cond_noise_scale=0.025,
    generator=torch.Generator().manual_seed(0),
    output_type="pil",
).frames[0]

# Part 4. Downscale the video to the expected resolution
video = [frame.resize((expected_width, expected_height)) for frame in video]

export_to_video(video, "output.mp4", fps=24)
```

For more details and inference examples using 🧨 diffusers, check out the [diffusers documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video).

Diffusers also supports directly loading from the original LTX checkpoints using the `from_single_file()` method. Check out [this section](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video#loading-single-files) to learn more.

## Limitations

- This model is not intended or able to provide factual information.
- As a statistical model this checkpoint might amplify existing societal biases.
- The model may fail to generate videos that match the prompts perfectly.
- Prompt following is heavily influenced by the prompting style.
Firoj112/v1-vits-model
Firoj112
2025-05-26T05:16:30Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-05-26T05:16:17Z
# my_vits_model

## Model Description

A VITS-based TTS model for English speech synthesis.

- **Language(s)**: English
- **Type**: Single-speaker Text-to-Speech
- **Model Type**: VITS
- **Framework**: Coqui TTS
- **Uploaded**: 2025-05-26

## Intended Use

- **Primary Use**: Generating single-speaker speech from text input for applications like virtual assistants, audiobooks, or accessibility tools.
- **Out of Scope**: Real-time applications if not optimized for low latency.

## Usage

To load and use the model:

```python
from safetensors.torch import load_file
from TTS.config import load_config
from TTS.tts.models import setup_model

# Load configuration
config = load_config("config.json")
model = setup_model(config)

# Load weights
state_dict = load_file("my_vits_model.safetensors")
model.load_state_dict(state_dict)
model.eval()

# Example inference (single-speaker model, so no speaker_id is needed)
text = "Hello, this is a test."
wav = model.inference(text, speaker_id=None)
```

## Training Data

- **Dataset**: Custom dataset
- **Preprocessing**: Text normalized, audio sampled at 22050 Hz

## Evaluation

- **Metrics**: [Add metrics, e.g., Mean Opinion Score (MOS), Word Error Rate (WER)]
- **Results**: [Add results, e.g., "Achieved MOS of 4.2 on test set"]

## Limitations

- Limited to English.
- Performance may vary with noisy or complex input text.

## License

- Released under apache-2.0.

## Ethical Considerations

- Ensure responsible use to avoid generating misleading or harmful audio content.
- Verify input text to prevent biased or offensive outputs.

## Dependencies

- `TTS` (Coqui TTS)
- `safetensors`
- `torch`
g-assismoraes/gemma-3-1b-it-agnews
g-assismoraes
2025-05-26T05:16:23Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "conversational", "base_model:google/gemma-3-1b-it", "base_model:finetune:google/gemma-3-1b-it", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-26T02:33:58Z
---
library_name: transformers
license: gemma
base_model: google/gemma-3-1b-it
tags:
- generated_from_trainer
model-index:
- name: gemma-3-1b-it-agnews
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# gemma-3-1b-it-agnews

This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1085

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1073        | 1.0   | 27000 | 1.1091          |
| 1.0571        | 2.0   | 54000 | 1.1085          |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
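### Reconstructed training arguments (illustrative)

For reference, the hyperparameters listed above correspond roughly to the following `TrainingArguments`; this is a hypothetical reconstruction for illustration, not the original training script:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed in this card
training_args = TrainingArguments(
    output_dir="gemma-3-1b-it-agnews",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```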
andyrdt/rl_loans
andyrdt
2025-05-26T05:10:51Z
0
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
2025-05-26T04:35:04Z
--- license: apache-2.0 --- This repository contains models from the blog post [Do models say what they learn?](https://www.lesswrong.com/posts/abtegBoDfnCzewndm/do-models-say-what-they-learn). Training code is available [here](https://github.com/andyrdt/rl_loans).
Linslab/VLA-OS
Linslab
2025-05-26T05:09:32Z
0
0
null
[ "region:us" ]
null
2025-05-20T05:12:01Z
SaoSamarth/openai-whisper-large-v2-Khmer-update-1
SaoSamarth
2025-05-26T05:08:32Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-26T05:08:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
chaimachabir/lora-data1-data2-tinyllama
chaimachabir
2025-05-26T05:01:19Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2025-05-26T03:52:42Z
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: lora-data1-data2-tinyllama
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# lora-data1-data2-tinyllama

This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
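### Loading the adapter (sketch)

The card does not show how to load the adapter. A minimal sketch with `peft`, assuming this repo contains the LoRA adapter weights on top of the listed base model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "chaimachabir/lora-data1-data2-tinyllama"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```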
srosalesr/HF_practical_distilbert-base-uncased
srosalesr
2025-05-26T05:01:07Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-25T22:18:34Z
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: HF_practical_distilbert-base-uncased
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# HF_practical_distilbert-base-uncased

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- Accuracy: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3241        | 1.0   | 7    | 0.0377          | 1.0      |
| 0.0174        | 2.0   | 14   | 0.0050          | 1.0      |
| 0.0035        | 3.0   | 21   | 0.0018          | 1.0      |
| 0.0015        | 4.0   | 28   | 0.0011          | 1.0      |
| 0.001         | 5.0   | 35   | 0.0008          | 1.0      |
| 0.0008        | 6.0   | 42   | 0.0006          | 1.0      |
| 0.0007        | 7.0   | 49   | 0.0006          | 1.0      |
| 0.0006        | 8.0   | 56   | 0.0005          | 1.0      |
| 0.0007        | 9.0   | 63   | 0.0005          | 1.0      |
| 0.0005        | 10.0  | 70   | 0.0005          | 1.0      |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
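### Example usage (sketch)

As a usage sketch (not part of the original card), the fine-tuned classifier can be called through the `pipeline` API:

```python
from transformers import pipeline

# Load the fine-tuned text classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="srosalesr/HF_practical_distilbert-base-uncased",
)
print(classifier("This is an example sentence to classify."))
```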
kyu5787/exaone-2.4b-mlx
kyu5787
2025-05-26T04:58:48Z
0
0
mlx
[ "mlx", "safetensors", "exaone", "lg-ai", "exaone-3.5", "text-generation", "conversational", "custom_code", "en", "ko", "base_model:LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct", "base_model:finetune:LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct", "license:other", "region:us" ]
text-generation
2025-05-26T04:55:44Z
--- license: other license_name: exaone license_link: LICENSE language: - en - ko tags: - lg-ai - exaone - exaone-3.5 - mlx pipeline_tag: text-generation library_name: mlx base_model: LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct --- # kyu5787/exaone-2.4b-mlx This model [kyu5787/exaone-2.4b-mlx](https://huggingface.co/kyu5787/exaone-2.4b-mlx) was converted to MLX format from [LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct](https://huggingface.co/LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct) using mlx-lm version **0.24.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("kyu5787/exaone-2.4b-mlx") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
BishakhaBiswas/custom-generate-demo
BishakhaBiswas
2025-05-26T04:57:24Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-26T04:57:24Z
--- license: apache-2.0 ---
mdmy/vision-only-v1
mdmy
2025-05-26T04:52:44Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-26T03:12:47Z
--- base_model: Qwen/Qwen2-VL-7B-Instruct library_name: transformers model_name: vision-only-v1 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for vision-only-v1 This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="mdmy/vision-only-v1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/paypal/250525-qwen2-7b-instruct-sft-nutrition-table-detection/runs/ex0fvlzo) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.50.1 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
takahashi111/Qwen3-0.6B-unsloth-bnb-4bit_hiragana2katakana_20250524_checkpoint-16830
takahashi111
2025-05-26T04:51:42Z
0
0
transformers
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-26T04:51:37Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
takahashi111/umt5-base_hiragana2katakana_20250519_epoch14
takahashi111
2025-05-26T04:48:15Z
0
0
transformers
[ "transformers", "safetensors", "umt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-26T04:46:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
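The card template above leaves "How to Get Started" empty; the sketch below is inferred from the repository name (hiragana-to-katakana conversion with umT5) and is an assumption about the input format, not documented usage.

```python
# Sketch: seq2seq generation with this checkpoint; the plain-hiragana input
# format is an assumption inferred from the model name.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "takahashi111/umt5-base_hiragana2katakana_20250519_epoch14"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("こんにちは", return_tensors="pt")  # hiragana input (assumed)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # expected: katakana
```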
PonyDing/Mayishenxiang-Llama-model-8B
PonyDing
2025-05-26T04:45:38Z
0
0
null
[ "gguf", "llama", "算命", "预测", "text-generation", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-8B-Instruct", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-23T06:04:28Z
--- base_model: - meta-llama/Llama-3.1-8B-Instruct pipeline_tag: text-generation tags: - 算命 - 预测 ---
mradermacher/Qwen3-4B-Open-R1-Distill_1-GGUF
mradermacher
2025-05-26T04:44:31Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "open-r1", "trl", "sft", "en", "dataset:dataset/processed_data_bailian.jsonl", "base_model:angelchen/Qwen3-4B-Open-R1-Distill_1", "base_model:quantized:angelchen/Qwen3-4B-Open-R1-Distill_1", "endpoints_compatible", "region:us" ]
null
2025-05-25T02:26:20Z
--- base_model: angelchen/Qwen3-4B-Open-R1-Distill_1 datasets: dataset/processed_data_bailian.jsonl language: - en library_name: transformers model_name: Qwen3-4B-Open-R1-Distill_1 quantized_by: mradermacher tags: - generated_from_trainer - open-r1 - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/angelchen/Qwen3-4B-Open-R1-Distill_1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-4B-Open-R1-Distill_1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Open-R1-Distill_1-GGUF/resolve/main/Qwen3-4B-Open-R1-Distill_1.Q2_K.gguf) | Q2_K | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Open-R1-Distill_1-GGUF/resolve/main/Qwen3-4B-Open-R1-Distill_1.Q3_K_S.gguf) | Q3_K_S | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Open-R1-Distill_1-GGUF/resolve/main/Qwen3-4B-Open-R1-Distill_1.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Open-R1-Distill_1-GGUF/resolve/main/Qwen3-4B-Open-R1-Distill_1.Q3_K_L.gguf) | Q3_K_L | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Open-R1-Distill_1-GGUF/resolve/main/Qwen3-4B-Open-R1-Distill_1.IQ4_XS.gguf) | IQ4_XS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Open-R1-Distill_1-GGUF/resolve/main/Qwen3-4B-Open-R1-Distill_1.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Open-R1-Distill_1-GGUF/resolve/main/Qwen3-4B-Open-R1-Distill_1.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Open-R1-Distill_1-GGUF/resolve/main/Qwen3-4B-Open-R1-Distill_1.Q5_K_S.gguf) | Q5_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Open-R1-Distill_1-GGUF/resolve/main/Qwen3-4B-Open-R1-Distill_1.Q5_K_M.gguf) | Q5_K_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Open-R1-Distill_1-GGUF/resolve/main/Qwen3-4B-Open-R1-Distill_1.Q6_K.gguf) | Q6_K | 3.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Open-R1-Distill_1-GGUF/resolve/main/Qwen3-4B-Open-R1-Distill_1.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Open-R1-Distill_1-GGUF/resolve/main/Qwen3-4B-Open-R1-Distill_1.f16.gguf) | f16 | 8.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
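## Example (sketch)

The Usage section above defers to external READMEs; as a minimal illustration, a file from the quant table can be loaded with llama-cpp-python (the package choice and local file path are assumptions, not instructions from this repo):

```python
# Sketch: load a downloaded GGUF file and run a short completion.
from llama_cpp import Llama

llm = Llama(model_path="Qwen3-4B-Open-R1-Distill_1.Q4_K_M.gguf")  # file name from the table above
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```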
<!-- end -->
Oceans-ID/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-wily_arctic_sardine
Oceans-ID
2025-05-26T04:44:27Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am wily arctic sardine", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-26T04:44:24Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-wily_arctic_sardine tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am wily arctic sardine - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-wily_arctic_sardine This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Oceans-ID/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-wily_arctic_sardine", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
jyoung105/ent2_t11
jyoung105
2025-05-26T04:42:31Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-26T04:22:35Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # Ent2_T11 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/jyoung105/ent2_t11/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('jyoung105/ent2_t11', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 64 ## Contribute your own examples You can use the [community tab](https://huggingface.co/jyoung105/ent2_t11/discussions) to add images that show off what you’ve made with this LoRA.
unsloth/Llama-3_1-Nemotron-Ultra-253B-v1-GGUF
unsloth
2025-05-26T04:42:22Z
2,725
9
transformers
[ "transformers", "gguf", "nemotron-nas", "text-generation", "nvidia", "llama-3", "pytorch", "custom_code", "en", "arxiv:2503.18908", "arxiv:2505.00949", "arxiv:2502.00203", "arxiv:2411.19146", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-13T05:06:45Z
--- library_name: transformers license: other license_name: nvidia-open-model-license license_link: >- https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/ pipeline_tag: text-generation language: - en tags: - nvidia - llama-3 - pytorch --- # Llama-3.1-Nemotron-Ultra-253B-v1 ## Model Overview ![Accuracy Plot](./accuracy_plot.png) Llama-3.1-Nemotron-Ultra-253B-v1 is a large language model (LLM) which is a derivative of [Meta Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct) (AKA the *reference model*). It is a reasoning model that is post trained for reasoning, human chat preferences, and tasks, such as RAG and tool calling. The model supports a context length of 128K tokens. This model fits on a single 8xH100 node for inference. Llama-3.1-Nemotron-Ultra-253B-v1 is a model which offers a great tradeoff between model accuracy and efficiency. Efficiency (throughput) directly translates to savings. Using a novel Neural Architecture Search (NAS) approach, we greatly reduce the model’s memory footprint, enabling larger workloads, as well as reducing the number of GPUs required to run the model in a data center environment. This NAS approach enables the selection of a desired point in the accuracy-efficiency tradeoff. Furthermore, by using a novel method to vertically compress the model (see details [here](https://arxiv.org/abs/2503.18908)), it also offers a significant improvement in latency. The model underwent a multi-phase post-training process to enhance both its reasoning and non-reasoning capabilities. This includes a supervised fine-tuning stage for Math, Code, Reasoning, Chat, and Tool Calling as well as multiple reinforcement learning (RL) stages using Group Relative Policy Optimization (GRPO) algorithms for reasoning, chat, and instruction-following. This model is ready for commercial use. For more details on how the model was trained, please see our [technical report](https://arxiv.org/abs/2505.00949) and [blog](https://developer.nvidia.com/blog/build-enterprise-ai-agents-with-advanced-open-nvidia-llama-nemotron-reasoning-models/). ![Training Flow](./training_flowchart.png) This model is part of the Llama Nemotron Collection. You can find the other model(s) in this family here: - [Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) - [Llama-3.3-Nemotron-Super-49B-v1](https://huggingface.co/nvidia/Llama-3\_3-Nemotron-Super-49B-v1) ## License/Terms of Use GOVERNING TERMS: Your use of this model is governed by the [NVIDIA Open Model License.](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) Additional Information: [Llama 3.1 Community License Agreement](https://www.llama.com/llama3\_1/license/). Built with Llama. **Model Developer:** NVIDIA **Model Dates:** Trained between November 2024 and April 2025 **Data Freshness:** The pretraining data has a cutoff of 2023 per Llama-3.1-405B-Instruct ### Use Case: Developers designing AI Agent systems, chatbots, RAG systems, and other AI-powered applications. Also suitable for typical instruction-following tasks. 
### Release Date: 2025-04-07 ## References * [\[2505.00949\] Llama-Nemotron: Efficient Reasoning Models](https://arxiv.org/abs/2505.00949) * [\[2502.00203\] Reward-aware Preference Optimization: A Unified Mathematical Framework for Model Alignment](https://arxiv.org/abs/2502.00203) * [\[2411.19146\] Puzzle: Distillation-Based NAS for Inference-Optimized LLMs](https://arxiv.org/abs/2411.19146) * [\[2503.18908\] FFN Fusion: Rethinking Sequential Computation in Large Language Models](https://arxiv.org/abs/2503.18908) ## Model Architecture **Architecture Type:** Dense decoder-only Transformer model **Network Architecture:** Llama-3.1-405B-Instruct, customized through Neural Architecture Search (NAS) **This model was developed based on Llama-3.1-405B-Instruct.** <br> **This model has 253B model parameters.** <br> The model is a derivative of Llama 3.1-405B-Instruct, using Neural Architecture Search (NAS). The NAS algorithm results in non-standard and non-repetitive blocks. This includes the following: * Skip attention: In some blocks, the attention is skipped entirely, or replaced with a single linear layer. * Variable FFN: The expansion/compression ratio in the FFN layer is different between blocks. * FFN Fusion: When several consecutive attention layers are skipped, which can result in a sequence of multiple FFNs, that sequence of FFNs is fused into a smaller number of wider FFN layers. For each block of the reference model, we create multiple variants providing different tradeoffs of quality vs. computational complexity, discussed in more depth below. We then search over the blocks to create a model which meets the required throughput and memory while minimizing the quality degradation. To recover performance, the model initially undergoes knowledge distillation (KD) for 65 billion tokens. This is followed by a continual pretraining (CPT) phase for 88 billion tokens. ## Intended use Llama-3.1-Nemotron-Ultra-253B-v1 is a general-purpose reasoning and chat model intended to be used in English and coding languages. Other non-English languages (German, French, Italian, Portuguese, Hindi, Spanish, and Thai) are also supported. ## Input - **Input Type:** Text - **Input Format:** String - **Input Parameters:** One-Dimensional (1D) - **Other Properties Related to Input:** Context length up to 131,072 tokens ## Output - **Output Type:** Text - **Output Format:** String - **Output Parameters:** One-Dimensional (1D) - **Other Properties Related to Output:** Context length up to 131,072 tokens ## Software Integration - **Runtime Engine:** Transformers - **Recommended Hardware Microarchitecture Compatibility:** - NVIDIA Hopper - NVIDIA Ampere - **Preferred Operating System(s):** Linux ## Model Version 1.0 (4/7/2025) ## Quick Start and Usage Recommendations: 1. Reasoning mode (ON/OFF) is controlled via the system prompt, which must be set as shown in the example below. All instructions should be contained within the user prompt 2. We recommend setting temperature to \`0.6\`, and Top P to \`0.95\` for Reasoning ON mode 3. We recommend using greedy decoding (temperature 0\) for Reasoning OFF mode 4. We do not recommend adding system prompts beyond the control prompt; all instructions should be put into the user query 5. We have provided a list of prompts to use for evaluation for each benchmark where a specific template is required 6. 
The model will include `<think></think>` even if no reasoning was necessary in Reasoning ON mode; this is expected behaviour. You can try this model out through the preview API, using this link: [Llama-3\_1-Nemotron-Ultra-253B-v1](https://build.nvidia.com/nvidia/llama-3\_1-nemotron-ultra-253b-v1). ### Use It with Transformers See the snippet below for usage with the [Hugging Face Transformers](https://huggingface.co/docs/transformers/main/en/index) library. Reasoning mode (ON/OFF) is controlled via the system prompt. Please see the examples below. We recommend using the *transformers* package with version 4.48.3. Example of reasoning on: ```py import torch import transformers model_id = "nvidia/Llama-3_1-Nemotron-Ultra-253B-v1" model_kwargs = {"torch_dtype": torch.bfloat16, "trust_remote_code": True, "device_map": "auto"} tokenizer = transformers.AutoTokenizer.from_pretrained(model_id) tokenizer.pad_token_id = tokenizer.eos_token_id pipeline = transformers.pipeline( "text-generation", model=model_id, tokenizer=tokenizer, max_new_tokens=32768, temperature=0.6, top_p=0.95, **model_kwargs ) thinking = "on" print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"},{"role": "user", "content": "Solve x*(sin(x)+2)=0"}])) ``` Example of reasoning off: ```py import torch import transformers model_id = "nvidia/Llama-3_1-Nemotron-Ultra-253B-v1" model_kwargs = {"torch_dtype": torch.bfloat16, "trust_remote_code": True, "device_map": "auto"} tokenizer = transformers.AutoTokenizer.from_pretrained(model_id) tokenizer.pad_token_id = tokenizer.eos_token_id pipeline = transformers.pipeline( "text-generation", model=model_id, tokenizer=tokenizer, max_new_tokens=32768, do_sample=False, **model_kwargs ) thinking = "off" print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"},{"role": "user", "content": "Solve x*(sin(x)+2)=0"}])) ``` ### Use It with vLLM ``` pip install vllm==0.8.3 ``` An example of how to serve with vLLM: ``` python3 -m vllm.entrypoints.openai.api_server \ --model "nvidia/Llama-3_1-Nemotron-Ultra-253B-v1" \ --trust-remote-code \ --seed=1 \ --host="0.0.0.0" \ --port=5000 \ --served-model-name "nvidia/Llama-3_1-Nemotron-Ultra-253B-v1" \ --tensor-parallel-size=8 \ --max-model-len=32768 \ --gpu-memory-utilization 0.95 \ --enforce-eager ``` ## Inference: **Engine:** - Transformers **Test Hardware:** - BF16: - 8x NVIDIA H100-80GB - 4x NVIDIA B100 - FP8: - 4x NVIDIA H100-80GB ## Training and Evaluation Datasets ## Training Datasets A large variety of training data was used for the knowledge distillation phase before the post-training pipeline, three of which were FineWeb, Buzz-V1.2, and Dolma. The data for the multi-stage post-training phases is a compilation of SFT and RL data that supports improvements of math, code, general reasoning, and instruction-following capabilities of the original Llama instruct model. Prompts have been sourced from public and open corpora or synthetically generated. Responses were synthetically generated by a variety of models, with some prompts containing responses for both reasoning on and off modes, to train the model to distinguish between the two modes. This model was improved with Qwen. We have released our [Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset) to promote openness and transparency in model development and improvement. 
**Data Collection for Training Datasets:** - Hybrid: Automated, Human, Synthetic **Data Labeling for Training Datasets:** - Hybrid: Automated, Human, Synthetic ## Evaluation Datasets We used the datasets listed in the next section to evaluate Llama-3.1-Nemotron-Ultra-253B-v1. Data Collection for Evaluation Datasets: - Hybrid: Human/Synthetic Data Labeling for Evaluation Datasets: - Hybrid: Human/Synthetic/Automatic ## Evaluation Results *These results contain both Reasoning On, and Reasoning Off. We recommend using temperature=\`0.6\`, top\_p=\`0.95\` for Reasoning On mode, and greedy decoding for Reasoning Off mode. All evaluations are done with 32k sequence length. We run the benchmarks up to 16 times and average the scores to be more accurate.* > NOTE: Where applicable, a Prompt Template will be provided. While completing benchmarks, please ensure that you are parsing for the correct output format as per the provided prompt in order to reproduce the benchmarks seen below. ### GPQA | Reasoning Mode | pass@1 | |--------------|------------| | Reasoning Off | 56.60 | | Reasoning On | 76.01 | User Prompt Template: ``` "What is the correct answer to this question: {question}\nChoices:\nA. {option_A}\nB. {option_B}\nC. {option_C}\nD. {option_D}\nLet's think step by step, and put the final answer (should be a single letter A, B, C, or D) into a \boxed{}" ``` ### AIME25 | Reasoning Mode | pass@1 | |--------------|------------| | Reasoning Off | 16.67 | | Reasoning On | 72.50 | User Prompt Template: ``` "Below is a math question. I want you to reason through the steps and then give a final answer. Your final answer should be in \boxed{}.\nQuestion: {question}" ``` ### BFCL V2 Live | Reasoning Mode | Score | |--------------|------------| | Reasoning Off | 73.62 | | Reasoning On | 74.10 | User Prompt Template: ``` You are an expert in composing functions. You are given a question and a set of possible functions. Based on the question, you will need to make one or more function/tool calls to achieve the purpose. If none of the function can be used, point it out. If the given question lacks the parameters required by the function, also point it out. You should only return the function call in tools call sections. If you decide to invoke any of the function(s), you MUST put it in the format of <TOOLCALL>[func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)]</TOOLCALL> You SHOULD NOT include any other text in the response. Here is a list of functions in JSON format that you can invoke. <AVAILABLE_TOOLS>{functions}</AVAILABLE_TOOLS> {user_prompt} ``` ### LiveCodeBench (20240801-20250201) | Reasoning Mode | pass@1 | |--------------|------------| | Reasoning Off | 29.03 | | Reasoning On | 66.31 | User Prompt Template (without starter code): ```` "You will be given a question (problem specification) and will generate a correct Python program that matches the specification and passes all tests. Question: {prompt} Read the inputs from stdin solve the problem and write the answer to stdout (do not directly test on the sample inputs). Enclose your code within delimiters as follows. Ensure that when the python program runs, it reads the inputs, runs the algorithm and writes output to STDOUT. ```python # YOUR CODE HERE ``` ```` User Prompt Template (with starter code): ```` You will be given a question (problem specification) and will generate a correct Python program that matches the specification and passes all tests. 
Question: {prompt} You will use the following starter code to write the solution to the problem and enclose your code within delimiters. ```python {starter_code} ``` ```` ### IFEval | Reasoning Mode | Strict:Instruction | |--------------|------------| | Reasoning Off | 88.85 | | Reasoning On | 89.45 | ### MATH500 | Reasoning Mode | pass@1 | |--------------|------------| | Reasoning Off | 80.40 | | Reasoning On | 97.00 | User Prompt Template: ``` "Below is a math question. I want you to reason through the steps and then give a final answer. Your final answer should be in \boxed{}.\nQuestion: {question}" ``` ### JudgeBench | Reasoning Mode | Knowledge Score | Reasoning Score | Math Score | Coding Score | Overall Score | |--------------|------------|------------|------------|------------|------------| | Reasoning On | 70.13 | 81.63 | 89.29 | 92.86 | 79.14 | ## Ethical Considerations: NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](./EXPLAINABILITY.md), [Bias](./BIAS.md), [Safety & Security](./SAFETY_and_SECURITY.md), and [Privacy](./PRIVACY.md) Subcards. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/). ## Citation ``` @misc{bercovich2025llamanemotronefficientreasoningmodels, title={Llama-Nemotron: Efficient Reasoning Models}, author={Akhiad Bercovich and Itay Levy and Izik Golan and Mohammad Dabbah and Ran El-Yaniv and Omri Puny and Ido Galil and Zach Moshe and Tomer Ronen and Najeeb Nabwani and Ido Shahaf and Oren Tropp and Ehud Karpas and Ran Zilberstein and Jiaqi Zeng and Soumye Singhal and Alexander Bukharin and Yian Zhang and Tugrul Konuk and Gerald Shen and Ameya Sunil Mahabaleshwarkar and Bilal Kartal and Yoshi Suhara and Olivier Delalleau and Zijia Chen and Zhilin Wang and David Mosallanezhad and Adi Renduchintala and Haifeng Qian and Dima Rekesh and Fei Jia and Somshubra Majumdar and Vahid Noroozi and Wasi Uddin Ahmad and Sean Narenthiran and Aleksander Ficek and Mehrzad Samadi and Jocelyn Huang and Siddhartha Jain and Igor Gitman and Ivan Moshkov and Wei Du and Shubham Toshniwal and George Armstrong and Branislav Kisacanin and Matvei Novikov and Daria Gitman and Evelina Bakhturina and Jane Polak Scowcroft and John Kamalu and Dan Su and Kezhi Kong and Markus Kliegl and Rabeeh Karimi and Ying Lin and Sanjeev Satheesh and Jupinder Parmar and Pritam Gundecha and Brandon Norick and Joseph Jennings and Shrimai Prabhumoye and Syeda Nahida Akter and Mostofa Patwary and Abhinav Khattar and Deepak Narayanan and Roger Waleffe and Jimmy Zhang and Bor-Yiing Su and Guyue Huang and Terry Kong and Parth Chadha and Sahil Jain and Christine Harvey and Elad Segal and Jining Huang and Sergey Kashirsky and Robert McQueen and Izzy Putterman and George Lam and Arun Venkatesan and Sherry Wu and Vinh Nguyen and Manoj Kilaru and Andrew Wang and Anna Warno and Abhilash Somasamudramath and Sandip Bhaskar and Maka Dong and Nave Assaf and Shahar Mor and Omer Ullman Argov and Scot Junkin and Oleksandr Romanenko and Pedro Larroy and Monika Katariya and Marco 
Rovinelli and Viji Balas and Nicholas Edelman and Anahita Bhiwandiwalla and Muthu Subramaniam and Smita Ithape and Karthik Ramamoorthy and Yuting Wu and Suguna Varshini Velury and Omri Almog and Joyjit Daw and Denys Fridman and Erick Galinkin and Michael Evans and Katherine Luna and Leon Derczynski and Nikki Pope and Eileen Long and Seth Schneider and Guillermo Siman and Tomasz Grzegorzek and Pablo Ribalta and Monika Katariya and Joey Conway and Trisha Saar and Ann Guan and Krzysztof Pawelec and Shyamala Prayaga and Oleksii Kuchaiev and Boris Ginsburg and Oluwatobi Olabiyi and Kari Briski and Jonathan Cohen and Bryan Catanzaro and Jonah Alben and Yonatan Geifman and Eric Chung and Chris Alexiuk}, year={2025}, eprint={2505.00949}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2505.00949}, } ```
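As a hedged companion to the vLLM command above, here is a minimal client sketch querying the served model through its OpenAI-compatible endpoint (host, port, model name, control prompt, and sampling values are taken from this card; the client code itself is illustrative):

```python
# Sketch: query the vLLM server started with the command in this card.
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:5000/v1", api_key="not-needed")  # vLLM ignores the key
response = client.chat.completions.create(
    model="nvidia/Llama-3_1-Nemotron-Ultra-253B-v1",
    messages=[
        {"role": "system", "content": "detailed thinking on"},  # Reasoning ON control prompt
        {"role": "user", "content": "Solve x*(sin(x)+2)=0"},
    ],
    temperature=0.6,  # recommended for Reasoning ON mode
    top_p=0.95,
)
print(response.choices[0].message.content)
```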
papaymaguire/noopsdg-experiment-lora
papaymaguire
2025-05-26T04:41:04Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-26T04:40:39Z
--- base_model: Qwen/Qwen2.5-3B-Instruct library_name: transformers model_name: noopsdg-experiment-lora tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for noopsdg-experiment-lora This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="papaymaguire/noopsdg-experiment-lora", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.1 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Suraponn/Llama-SEA-LION-audio-preview
Suraponn
2025-05-26T04:35:19Z
8
0
transformers
[ "transformers", "pytorch", "sealionaudio", "feature-extraction", "text-generation", "custom_code", "th", "en", "license:llama3", "region:us" ]
text-generation
2025-05-24T15:52:06Z
--- library_name: transformers license: llama3 language: - th - en pipeline_tag: text-generation --- # Llama-SEA-LION-audio-preview **Llama-SEA-LION-audio-preview** is a 🇹🇭 Thai audio-language model designed to natively support both text and audio inputs, with text output. This is a research preview, the result of a collaborative effort between SCB10X and AI Singapore. The model is built on top of [aisingapore/Llama-SEA-LION-v3-8B-IT](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-8B-IT), a powerful instruction-tuned language model for Southeast Asian languages. ## Model Description - **Model type**: The LLM is based on Llama-SEA-LION-v3-8B-IT, and the audio encoder is based on Whisper's encoder and BEATs. - **Requirement**: transformers 4.45.0 - **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧 - **License**: [Llama 3 Community License](https://llama.meta.com/llama3/license/) ## Usage Example ```python import torch from transformers import AutoModel import soundfile as sf import librosa # Initialize from the trained model model = AutoModel.from_pretrained( "Suraponn/Llama-SEA-LION-audio-preview", torch_dtype=torch.float16, trust_remote_code=True ) model.to("cuda") model.eval() # read a wav file (it needs to be in 16 kHz and clipped to 30 seconds) audio, sr = sf.read("path_to_your_audio.wav") if len(audio.shape) == 2: audio = audio[:, 0] if len(audio) > 30 * sr: audio = audio[: 30 * sr] if sr != 16000: audio = librosa.resample(audio, orig_sr=sr, target_sr=16000, res_type="fft") # Run generation prompt_pattern="<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n<Speech><SpeechHere></Speech> {}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" response = model.generate( audio=audio, prompt="transcribe this audio", prompt_pattern=prompt_pattern, do_sample=False, max_new_tokens=512, repetition_penalty=1.1, num_beams=1, # temperature=0.4, # top_p=0.9, ) print(response) ``` **Generation Parameters**: - audio -- audio input, e.g., read with `soundfile.read` and resampled with `librosa.resample` as in the example above - prompt (`str`) -- Text input to the model - prompt_pattern (`str`) -- Chat template that is augmented with special tokens; it must be set to the same pattern used during training - max_new_tokens (`int`, *optional*, defaults to 1024) - num_beams (`int`, *optional*, defaults to 4) - do_sample (`bool`, *optional*, defaults to True) - top_p (`float`, *optional*, defaults to 0.9) - repetition_penalty (`float`, *optional*, defaults to 1.0) - length_penalty (`float`, *optional*, defaults to 1.0) - temperature (`float`, *optional*, defaults to 1.0) There is also `model.generate_stream()` for streaming generation. Please refer to `modeling_typhoonaudio.py` for this function. ## Intended Uses & Limitations This model is experimental and may not always follow human instructions accurately, making it prone to generating hallucinations. Additionally, the model lacks moderation mechanisms and may produce harmful or inappropriate responses. Developers should carefully assess potential risks based on their specific applications. ## Acknowledgements This work builds upon the foundations laid by [scb10x/llama-3-typhoon-v1.5-8b-audio-preview](https://huggingface.co/scb10x/llama-3-typhoon-v1.5-8b-audio-preview) and its accompanying technical report, which we closely followed.
nell123/nell123
nell123
2025-05-26T04:32:27Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "mergekit", "merge", "conversational", "custom_code", "base_model:microsoft/Phi-3-mini-128k-instruct", "base_model:merge:microsoft/Phi-3-mini-128k-instruct", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:merge:microsoft/Phi-3-mini-4k-instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-26T04:30:10Z
--- base_model: - microsoft/Phi-3-mini-128k-instruct - microsoft/Phi-3-mini-4k-instruct library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method. ### Models Merged The following models were included in the merge: * [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) * [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: microsoft/Phi-3-mini-4k-instruct layer_range: [0, 32] - model: microsoft/Phi-3-mini-128k-instruct layer_range: [0, 32] merge_method: slerp base_model: microsoft/Phi-3-mini-4k-instruct parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
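The card gives the merge configuration but no loading example; a minimal sketch follows (the repo id comes from this record's metadata; `trust_remote_code` follows the Phi-3 base models' convention and may not be required):

```python
# Sketch: load the merged model for text generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nell123/nell123"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", trust_remote_code=True)
```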
amandaa/AutoL2S-7b
amandaa
2025-05-26T04:22:17Z
0
0
null
[ "safetensors", "qwen2", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-05-25T21:28:11Z
--- license: apache-2.0 base_model: - Qwen/Qwen2.5-7B-Instruct --- # AutoL2S-7B This is the official model repository for **AutoL2S-7B**, a model fine-tuned for efficient reasoning based on [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/tree/main). ## 💡 Overview AutoL2S automatically switches between short and long reasoning paths based on input complexity. Auto Long-Short Reasoning (AutoL2S) is a dynamic and model-agnostic framework that enables LLMs to compress their generated reasoning path based on the complexity of the reasoning question. AutoL2S enables a learned paradigm in which LLMs themselves can decide when longer reasoning is necessary and when shorter reasoning suffices, by training on data annotated with our proposed method, which includes both long and short CoT paths and a special \<EASY\> token (\<specialLong\> in the implementation). We then use the \<EASY\> token to indicate when the model can skip generating lengthy CoT reasoning. This proposed annotation strategy can enhance the LLMs’ ability to generate shorter CoT reasoning paths with improved quality after training. This repository contains: - Model weights - Configuration files - Necessary scripts in the `examples/` directory <p align="left"> <img src="https://cdn-uploads.huggingface.co/production/uploads/66f9bb2dd5575ad6914756ce/dVpIjeIaU8Hv1M5z5VWYS.png" width="40%" style="display:inline-block; margin-right: 10px;" /> <img src="https://cdn-uploads.huggingface.co/production/uploads/66f9bb2dd5575ad6914756ce/qxHTE-ZGTpxVjmkIX6Fk-.png" width="40%" style="display:inline-block;" /> </p> --- ## 🧩 Dependencies We recommend using the model with [vLLM](https://github.com/vllm-project/vllm). The code has been tested with: ``` vLLM == 0.6.2 ``` --- ## 🚀 How to Use Run the inference example: ```bash cd examples python run_inference.py ``` Alternatively, **please download examples/prefixLLM.py and examples/template.py from this repository and put them in your working directory**. ```python from vllm import SamplingParams from prefixLLM import PrefixLLM from template import SYSTEM_PROMPT, SHORT_TRIGGER llm = PrefixLLM(model="amandaa/AutoL2S-7b") max_tokens, temp = 32768, 0.7 sampling_params_route = SamplingParams(max_tokens=max_tokens, temperature=temp, stop=["<specialLong>"], include_stop_str_in_output=True) sampling_params_force_think = SamplingParams(max_tokens=max_tokens, temperature=temp) question = "Convert the point $(0,3)$ in rectangular coordinates to polar coordinates. Enter your answer in the form $(r,\\theta),$ where $r > 0$ and $0 \\le \\theta < 2 \\pi.$" messages = [ {"role": "system", "content": SYSTEM_PROMPT}, {"role": "user", "content": question} ] responses = llm.route_chat(messages=messages, sampling_params_route=sampling_params_route, sampling_params_force_think=sampling_params_force_think, use_tqdm=True, trigger_word=SHORT_TRIGGER) print(SHORT_TRIGGER + responses[0].outputs[0].text) ``` --- ## 🔍 Citation If you use this model in your work, please consider citing: ```bibtex @misc{autol2s2025, title = {AutoL2S: Auto Long-Short Reasoning for Efficient Large Language Models}, author = {Luo, Feng* and Chuang, Yu-Neng* and Wang, Guanchu* and Le, Duy and Zhong, Shaochen and Liu, Hongyi and Yuan, Jiayi and Sui, Yang and Braverman, Vladimir and Chaudhary, Vipin and Hu, Xia}, journal={arXiv preprint}, year={2025} } ```
OpenVINO/phi-4-int4-ov
OpenVINO
2025-05-26T04:21:33Z
0
0
transformers
[ "transformers", "openvino", "phi3", "text-generation", "phi", "nlp", "math", "code", "chat", "conversational", "en", "base_model:microsoft/phi-4", "base_model:quantized:microsoft/phi-4", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-23T16:20:34Z
--- license: mit license_link: https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE base_model: - microsoft/phi-4 base_model_relation: quantized language: - en pipeline_tag: text-generation tags: - phi - nlp - math - code - chat - conversational library_name: transformers --- # phi-4-int4-ov * Model creator: [microsoft](https://huggingface.co/microsoft) * Original model: [phi-4](https://huggingface.co/microsoft/phi-4) ## Description This is the [phi-4](https://huggingface.co/microsoft/phi-4) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf). ## Quantization Parameters Weight compression was performed using `nncf.compress_weights` with the following parameters: * mode: **INT4_ASYM** * ratio: **1.0** * group_size: **128** For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html). ## Compatibility The provided OpenVINO™ IR model is compatible with: * OpenVINO version 2025.1.0 and higher * Optimum Intel 1.24.0 and higher ## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) 1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend: ``` pip install optimum[openvino] ``` 2. Run model inference: ``` from transformers import AutoTokenizer from optimum.intel.openvino import OVModelForCausalLM model_id = "OpenVINO/phi-4-int4-ov" tokenizer = AutoTokenizer.from_pretrained(model_id) model = OVModelForCausalLM.from_pretrained(model_id) inputs = tokenizer("What is OpenVINO?", return_tensors="pt") outputs = model.generate(**inputs, max_length=200) text = tokenizer.batch_decode(outputs)[0] print(text) ``` For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html) guide. ## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai) 1. Install packages required for using OpenVINO GenAI: ``` pip install openvino-genai huggingface_hub ``` 2. Download the model from the Hugging Face Hub: ``` import huggingface_hub as hf_hub model_id = "OpenVINO/phi-4-int4-ov" model_path = "phi-4-int4-ov" hf_hub.snapshot_download(model_id, local_dir=model_path) ``` 3. Run model inference: ``` import openvino_genai as ov_genai device = "CPU" pipe = ov_genai.LLMPipeline(model_path, device) print(pipe.generate("What is OpenVINO?", max_length=200)) ``` More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples). You can find more detailed usage examples in the OpenVINO Notebooks: - [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM) - [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation) ## Limitations Check the original [model card](https://huggingface.co/microsoft/phi-4) for limitations. ## Legal information The original model is distributed under the [MIT](https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE) license. 
More details can be found in [phi-4](https://huggingface.co/microsoft/phi-4). ## Disclaimer Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
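For reference, here is a sketch of the `nncf.compress_weights` call described in the Quantization Parameters section above (mode, ratio, and group size come from that section; the IR file paths are hypothetical):

```python
# Sketch: INT4 weight compression of an OpenVINO IR model with NNCF.
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("phi-4/openvino_model.xml")  # hypothetical path to the uncompressed IR
compressed = nncf.compress_weights(
    model,
    mode=nncf.CompressWeightsMode.INT4_ASYM,
    ratio=1.0,
    group_size=128,
)
ov.save_model(compressed, "phi-4-int4-ov/openvino_model.xml")  # hypothetical output path
```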
mradermacher/distilgpt2-wiki-qa-i1-GGUF
mradermacher
2025-05-26T04:21:07Z
0
0
transformers
[ "transformers", "gguf", "gpt2", "en", "dataset:wiki_qa", "base_model:XBOT-RK/distilgpt2-wiki-qa", "base_model:quantized:XBOT-RK/distilgpt2-wiki-qa", "license:mit", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-05-26T03:46:34Z
---
base_model: XBOT-RK/distilgpt2-wiki-qa
datasets:
- wiki_qa
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- gpt2
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/XBOT-RK/distilgpt2-wiki-qa

<!-- provided-files -->

static quants are available at https://huggingface.co/mradermacher/distilgpt2-wiki-qa-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-IQ1_M.gguf) | i1-IQ1_M | 0.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.1 |  |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.1 |  |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-IQ2_S.gguf) | i1-IQ2_S | 0.1 |  |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-IQ2_M.gguf) | i1-IQ2_M | 0.1 |  |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 |  |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 |  |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 |  |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 |  |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 |  |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 |  |
| [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF/resolve/main/distilgpt2-wiki-qa.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
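As a concrete illustration of the Usage section above, here is a minimal sketch of running one of these quants with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (one of several GGUF runtimes). The prompt is arbitrary, and the Q4_K_M file is simply the "recommended" middle ground from the table:

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

# Downloads the chosen quant from this repo and loads it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/distilgpt2-wiki-qa-i1-GGUF",
    filename="distilgpt2-wiki-qa.i1-Q4_K_M.gguf",
)

out = llm("Question: What is a glacier? Answer:", max_tokens=64)
print(out["choices"][0]["text"])
```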
313707021-TING/qwen2.5-llm-reasoning
313707021-TING
2025-05-26T04:18:01Z
0
0
null
[ "safetensors", "qwen2", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-05-26T04:06:59Z
--- license: apache-2.0 base_model: - Qwen/Qwen2.5-7B-Instruct ---
FormlessAI/3dfd12f2-3878-4125-a86c-0a829cefbcbd
FormlessAI
2025-05-26T04:17:25Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:NousResearch/Hermes-2-Theta-Llama-3-8B", "base_model:finetune:NousResearch/Hermes-2-Theta-Llama-3-8B", "endpoints_compatible", "region:us" ]
null
2025-05-26T02:20:50Z
--- base_model: NousResearch/Hermes-2-Theta-Llama-3-8B library_name: transformers model_name: 3dfd12f2-3878-4125-a86c-0a829cefbcbd tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for 3dfd12f2-3878-4125-a86c-0a829cefbcbd This model is a fine-tuned version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/3dfd12f2-3878-4125-a86c-0a829cefbcbd", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/xxdx2zjm) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.52.3 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
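For orientation, SFT with TRL generally follows the pattern sketched below. This is not the training script used for this run; the dataset, output directory, and hyperparameters are placeholders:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: any conversational dataset with a "messages" column works.
train_dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="NousResearch/Hermes-2-Theta-Llama-3-8B",  # the base model named above
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="hermes-2-theta-sft"),
)
trainer.train()
```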
wuxia196/ppo-2m-LunarLander-v2
wuxia196
2025-05-26T04:16:26Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-05-26T04:16:08Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 283.57 +/- 17.66
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename; adjust to the actual .zip in this repository.
checkpoint = load_from_hub(
    repo_id="wuxia196/ppo-2m-LunarLander-v2",
    filename="ppo-2m-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
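To sanity-check the reported mean reward locally (assuming `gymnasium` with Box2D support, e.g. `pip install "gymnasium[box2d]"`), the standard SB3 evaluation helper can be used with the `model` loaded above:

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```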
LuyiCui/DeepSeek-R1-Distill-Qwen-1.5B-DPO-2-2
LuyiCui
2025-05-26T04:14:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "dpo", "conversational", "dataset:LuyiCui/numina-deepseek-r1-qwen-7b-efficient-2-preference", "arxiv:2305.18290", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-26T02:37:30Z
---
datasets: LuyiCui/numina-deepseek-r1-qwen-7b-efficient-2-preference
library_name: transformers
model_name: DeepSeek-R1-Distill-Qwen-1.5B-DPO-2-2
tags:
- generated_from_trainer
- open-r1
- trl
- dpo
licence: license
---

# Model Card for DeepSeek-R1-Distill-Qwen-1.5B-DPO-2-2

This model is a DPO fine-tune trained on the [LuyiCui/numina-deepseek-r1-qwen-7b-efficient-2-preference](https://huggingface.co/datasets/LuyiCui/numina-deepseek-r1-qwen-7b-efficient-2-preference) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="LuyiCui/DeepSeek-R1-Distill-Qwen-1.5B-DPO-2-2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cuiluyi/open-r1/runs/3j0s5whc)

This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).

### Framework versions

- TRL: 0.17.0.dev0
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title        = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author       = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year         = 2023,
    booktitle    = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url          = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor       = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
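For orientation, the DPO stage described above typically looks like the sketch below with TRL. This is illustrative rather than the exact script behind this checkpoint; the base model is inferred from the model name, and the hyperparameters are assumptions:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Preference dataset with prompt/chosen/rejected-style columns.
dataset = load_dataset("LuyiCui/numina-deepseek-r1-qwen-7b-efficient-2-preference", split="train")

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed base model
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="DeepSeek-R1-Distill-Qwen-1.5B-DPO", beta=0.1),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```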
pangjin001/lora_model-llama-shigev2
pangjin001
2025-05-26T04:08:32Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-26T04:08:20Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** pangjin001 - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
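For loading the uploaded model for inference, the Unsloth side typically looks like this sketch (the sequence length is an assumption):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="pangjin001/lora_model-llama-shigev2",
    max_seq_length=2048,  # assumption; set to whatever the model was trained with
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster inference path
```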
Vortex5/LuckyRP-24B
Vortex5
2025-05-26T04:03:28Z
0
0
null
[ "safetensors", "mistral", "merge", "mergekit", "roleplay", "storytelling", "base_model:cognitivecomputations/Dolphin3.0-Mistral-24B", "base_model:merge:cognitivecomputations/Dolphin3.0-Mistral-24B", "base_model:trashpanda-org/MS-24B-Mullein-v0", "base_model:merge:trashpanda-org/MS-24B-Mullein-v0", "license:apache-2.0", "region:us" ]
null
2025-05-26T01:55:10Z
---
license: apache-2.0
tags:
- merge
- mergekit
- roleplay
- storytelling
base_model:
- trashpanda-org/MS-24B-Mullein-v0
- cognitivecomputations/Dolphin3.0-Mistral-24B
---
# LuckyRP-24B

LuckyRP-24B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):

* [trashpanda-org/MS-24B-Mullein-v0](https://huggingface.co/trashpanda-org/MS-24B-Mullein-v0)
* [cognitivecomputations/Dolphin3.0-Mistral-24B](https://huggingface.co/cognitivecomputations/Dolphin3.0-Mistral-24B)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6669a3a617b838fda45637b8/qQpy13yAYpZHupUcWIocZ.png)

## Configuration:

The following YAML configuration was used to produce this model:

```yaml
merge_method: slerp
models:
  - model: trashpanda-org/MS-24B-Mullein-v0
    parameters:
      weight: 0.7
  - model: cognitivecomputations/Dolphin3.0-Mistral-24B
    parameters:
      weight: 0.3
base_model: trashpanda-org/MS-24B-Mullein-v0
tokenizer:
  source: base
parameters:
  t: 0.3
  normalize: true
dtype: bfloat16
out_dtype: bfloat16
```
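With this configuration saved as `config.yaml`, a merge like this one can be reproduced with mergekit's command-line entry point, e.g. `mergekit-yaml config.yaml ./LuckyRP-24B --cuda` (the output path is arbitrary).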
mradermacher/distilgpt2-wiki-qa-GGUF
mradermacher
2025-05-26T04:01:58Z
0
0
transformers
[ "transformers", "gguf", "gpt2", "en", "dataset:wiki_qa", "base_model:XBOT-RK/distilgpt2-wiki-qa", "base_model:quantized:XBOT-RK/distilgpt2-wiki-qa", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-05-25T02:21:44Z
--- base_model: XBOT-RK/distilgpt2-wiki-qa datasets: - wiki_qa language: - en library_name: transformers license: mit quantized_by: mradermacher tags: - gpt2 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/XBOT-RK/distilgpt2-wiki-qa <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/distilgpt2-wiki-qa-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-GGUF/resolve/main/distilgpt2-wiki-qa.Q2_K.gguf) | Q2_K | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-GGUF/resolve/main/distilgpt2-wiki-qa.Q3_K_S.gguf) | Q3_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-GGUF/resolve/main/distilgpt2-wiki-qa.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-GGUF/resolve/main/distilgpt2-wiki-qa.IQ4_XS.gguf) | IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-GGUF/resolve/main/distilgpt2-wiki-qa.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-GGUF/resolve/main/distilgpt2-wiki-qa.Q3_K_L.gguf) | Q3_K_L | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-GGUF/resolve/main/distilgpt2-wiki-qa.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-GGUF/resolve/main/distilgpt2-wiki-qa.Q5_K_S.gguf) | Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-GGUF/resolve/main/distilgpt2-wiki-qa.Q5_K_M.gguf) | Q5_K_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-GGUF/resolve/main/distilgpt2-wiki-qa.Q6_K.gguf) | Q6_K | 0.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-GGUF/resolve/main/distilgpt2-wiki-qa.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-wiki-qa-GGUF/resolve/main/distilgpt2-wiki-qa.f16.gguf) | f16 | 0.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
ajinkyapuar/nanoVLM
ajinkyapuar
2025-05-26T04:00:33Z
0
0
nanovlm
[ "nanovlm", "safetensors", "vision-language", "multimodal", "research", "image-text-to-text", "license:mit", "region:us" ]
image-text-to-text
2025-05-26T03:59:34Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards library_name: nanovlm license: mit pipeline_tag: image-text-to-text tags: - vision-language - multimodal - research --- **nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model. For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M. **Usage:** Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM. Follow the install instructions and run the following code: ```python from models.vision_language_model import VisionLanguageModel model = VisionLanguageModel.from_pretrained("ajinkyapuar/nanoVLM") ```
unsloth/DeepSeek-Prover-V2-671B
unsloth
2025-05-26T03:58:54Z
69
3
transformers
[ "transformers", "safetensors", "deepseek_v3", "text-generation", "deepseek", "unsloth", "conversational", "custom_code", "en", "base_model:deepseek-ai/DeepSeek-Prover-V2-671B", "base_model:quantized:deepseek-ai/DeepSeek-Prover-V2-671B", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "fp8", "region:us" ]
text-generation
2025-04-30T13:08:43Z
---
base_model: deepseek-ai/DeepSeek-Prover-V2-671B
language:
- en
library_name: transformers
tags:
- deepseek
- unsloth
- transformers
license: mit
---
<div align="center">
  <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
  <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
    <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
    <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>
<div align="center" style="line-height: 1;">
  <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
    <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
    <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
    <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>
<div align="center" style="line-height: 1;">
  <a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-CODE" style="margin: 2px;">
    <img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-MODEL" style="margin: 2px;">
    <img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

## 1. Introduction

We introduce DeepSeek-Prover-V2, an open-source large language model designed for formal theorem proving in Lean 4, with initialization data collected through a recursive theorem-proving pipeline powered by DeepSeek-V3. The cold-start training procedure begins by prompting DeepSeek-V3 to decompose complex problems into a series of subgoals. The proofs of resolved subgoals are synthesized into a chain-of-thought process, combined with DeepSeek-V3's step-by-step reasoning, to create an initial cold start for reinforcement learning. This process enables us to integrate both informal and formal mathematical reasoning into a unified model.

<p align="center">
  <img width="100%" src="https://github.com/deepseek-ai/DeepSeek-Prover-V2/blob/main/figures/performance.png?raw=true">
</p>

## 2. Model Summary

---

**Synthesize Cold-Start Reasoning Data through Recursive Proof Search**

- To construct the cold-start dataset, we develop a simple yet effective pipeline for recursive theorem proving, utilizing DeepSeek-V3 as a unified tool for both subgoal decomposition and formalization. We prompt DeepSeek-V3 to decompose theorems into high-level proof sketches while simultaneously formalizing these proof steps in Lean 4, resulting in a sequence of subgoals.

- We use a smaller 7B model to handle the proof search for each subgoal, thereby reducing the associated computational burden. Once the decomposed steps of a challenging problem are resolved, we pair the complete step-by-step formal proof with the corresponding chain-of-thought from DeepSeek-V3 to create cold-start reasoning data.

---

**Reinforcement Learning with Synthetic Cold-Start Data**

- We curate a subset of challenging problems that remain unsolved by the 7B prover model in an end-to-end manner, but for which all decomposed subgoals have been successfully resolved. By composing the proofs of all subgoals, we construct a complete formal proof for the original problem. This proof is then appended to DeepSeek-V3's chain-of-thought, which outlines the corresponding lemma decomposition, thereby producing a cohesive synthesis of informal reasoning and subsequent formalization.

- After fine-tuning the prover model on the synthetic cold-start data, we perform a reinforcement learning stage to further enhance its ability to bridge informal reasoning with formal proof construction. Following the standard training objective for reasoning models, we use binary correct-or-incorrect feedback as the primary form of reward supervision.

- The resulting model, DeepSeek-Prover-V2-671B, achieves state-of-the-art performance in neural theorem proving, reaching an 88.9% pass ratio on the MiniF2F-test and solving 49 out of 658 problems from PutnamBench. The proofs generated by DeepSeek-Prover-V2 for the miniF2F dataset are available for download as a [ZIP archive](https://github.com/deepseek-ai/DeepSeek-Prover-V2/blob/master/minif2f-solutions.zip).

---

## 3. ProverBench: Formalization of AIME and Textbook Problems

We introduce ProverBench, a benchmark dataset comprising 325 problems. Of these, 15 are formalized from number theory and algebra questions featured in the recent AIME competitions (AIME 24 and 25), offering authentic high-school competition-level challenges. The remaining 310 problems are drawn from curated textbook examples and educational tutorials, contributing a diverse and pedagogically grounded collection of formalized mathematical problems. This benchmark is designed to enable more comprehensive evaluation across both high-school competition problems and undergraduate-level mathematics.

<div align="center">

| Area | Count |
| :---------------------: | :-------: |
| AIME 24&25 | 15 |
| Number Theory | 40 |
| Elementary Algebra | 30 |
| Linear Algebra | 50 |
| Abstract Algebra | 40 |
| Calculus | 90 |
| Real Analysis | 30 |
| Complex Analysis | 10 |
| Functional Analysis | 10 |
| Probability | 10 |
| Total | 325 |

</div>

## 4. Model & Dataset Downloads

We release DeepSeek-Prover-V2 in two model sizes: 7B and 671B parameters. DeepSeek-Prover-V2-671B is trained on top of DeepSeek-V3-Base. DeepSeek-Prover-V2-7B is built upon DeepSeek-Prover-V1.5-Base and features an extended context length of up to 32K tokens.
<div align="center"> | **Model** | **Download** | | :-----------------------------: | :----------------------------------------------------------: | | DeepSeek-Prover-V2-7B | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-7B) | | DeepSeek-Prover-V2-671B | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B) | </div> <div align="center"> | **Dataset** | **Download** | | :-----------------------------: | :----------------------------------------------------------: | | DeepSeek-ProverBench | [🤗 HuggingFace](https://huggingface.co/datasets/deepseek-ai/DeepSeek-ProverBench) | </div> ## 5. Quick Start You can directly use [Huggingface's Transformers](https://github.com/huggingface/transformers) for model inference. DeepSeek-Prover-V2-671B shares the same architecture as DeepSeek-V3. For detailed information and supported features, please refer to [the DeepSeek-V3 documentation on Hugging Face](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/deepseek_v3.md). The following is a basic example of generating a proof for a problem from the miniF2F dataset: ````python from transformers import AutoModelForCausalLM, AutoTokenizer import torch torch.manual_seed(30) model_id = "DeepSeek-Prover-V2-7B" # or DeepSeek-Prover-V2-671B tokenizer = AutoTokenizer.from_pretrained(model_id) formal_statement = """ import Mathlib import Aesop set_option maxHeartbeats 0 open BigOperators Real Nat Topology Rat /-- What is the positive difference between $120\%$ of 30 and $130\%$ of 20? Show that it is 10.-/ theorem mathd_algebra_10 : abs ((120 : ℝ) / 100 * 30 - 130 / 100 * 20) = 10 := by sorry """.strip() prompt = """ Complete the following Lean 4 code: ```lean4 {} ``` Before producing the Lean 4 code to formally prove the given theorem, provide a detailed proof plan outlining the main proof steps and strategies. The plan should highlight key ideas, intermediate lemmas, and proof structures that will guide the construction of the final formal proof. """.strip() chat = [ {"role": "user", "content": prompt.format(formal_statement)}, ] model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True) inputs = tokenizer.apply_chat_template(chat, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device) import time start = time.time() outputs = model.generate(inputs, max_new_tokens=8192) print(tokenizer.batch_decode(outputs)) print(time.time() - start) ```` ## 6. License The use of DeepSeek-Prover-V2 models is subject to [the Model License](LICENSE-MODEL). ## 7. Contact If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
Oizys2517/AI_class_qw2.5-3B-instruction
Oizys2517
2025-05-26T03:58:54Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-26T03:58:54Z
--- license: apache-2.0 ---
youssefELK/judiciaireModwana
youssefELK
2025-05-26T03:57:52Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-26T03:57:42Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** youssefELK - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
opendiffusionai/sdxlone
opendiffusionai
2025-05-26T03:57:12Z
0
0
diffusers
[ "diffusers", "safetensors", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "diffusers:DummyValue", "region:us" ]
null
2025-05-25T18:58:10Z
---
base_model:
- stabilityai/stable-diffusion-xl-base-1.0
- zer0int/LongCLIP-GmP-ViT-L-14
---
# opendiffusionAI sdxlONE (RAW version V0.0)

## What is this?

This is the base SDXL model, but with the CLIP-L text encoder swapped out for "LongCLIP" and the CLIP-G sub-model removed.

### This is not sdxl-longcliponly

This is very similar to https://huggingface.co/opendiffusionai/sdxl-longcliponly. However, this version has the `text_encoder_2` and `tokenizer` models REMOVED.

On the one hand, this makes it smaller by a few gigs. (Note that this model is currently fp32 precision.)

On the other hand, it requires a modified diffusers module to use, until [my PR](https://github.com/huggingface/diffusers/pull/11610) is accepted into the upstream diffusers code.

## Why is this?

SDXL's largest limitations are primarily due to the lousy text CLIP(s) used. Not only are they of poor quality, but they have hidden token-count limits, which put the effective token count closer to 10.

It is believed that one of the reasons CLIP-G was added was to work around the limits of the original CLIP-L. But that makes the model harder to train, and needlessly takes up more memory and time. So, I created this version to experimentally prove the better way.

This allows use of up to 248 tokens with SDXL natively, without the layering hacks that some diffusion programs do. To see the difference this can make, see the example given at https://huggingface.co/opendiffusionai/sdxl-longcliponly.

# How to use

```sh
pip install torch
pip install git+https://github.com/ppbrown/diffusers-sdxlone@sdxl-fix
python test-one.py
```

This should generate an image in "testimg.png", demonstrating that it actually works.

---

# How this was made

I took sdxl-longcliponly and manually removed the unnecessary text models. I then also hand-edited the model_config.json file.
test-gen/qwen2-3b-easy_lr1e-5
test-gen
2025-05-26T03:56:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-26T03:43:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LandCruiser/sn29_cold_2605_4
LandCruiser
2025-05-26T03:56:38Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-26T01:55:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
manuross1/nbmafckdfll4k
manuross1
2025-05-26T03:48:50Z
3
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-25T04:20:39Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: nbmafckdfll4k --- # Nbmafckdfll4K <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `nbmafckdfll4k` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "nbmafckdfll4k", "lora_weights": "https://huggingface.co/manuross1/nbmafckdfll4k/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('manuross1/nbmafckdfll4k', weight_name='lora.safetensors') image = pipeline('nbmafckdfll4k').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 4000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/manuross1/nbmafckdfll4k/discussions) to add images that show off what you’ve made with this LoRA.
mci29/sn29_q2m3_endz
mci29
2025-05-26T03:47:24Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-26T03:43:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/gladiusprompt-vith-gpt2-GGUF
mradermacher
2025-05-26T03:44:47Z
2
0
transformers
[ "transformers", "gguf", "en", "base_model:RomeroRZ/gladiusprompt-vith-gpt2", "base_model:quantized:RomeroRZ/gladiusprompt-vith-gpt2", "endpoints_compatible", "region:us" ]
null
2025-05-25T02:19:10Z
--- base_model: RomeroRZ/gladiusprompt-vith-gpt2 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/RomeroRZ/gladiusprompt-vith-gpt2 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q2_K.gguf) | Q2_K | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q3_K_S.gguf) | Q3_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.IQ4_XS.gguf) | IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q3_K_L.gguf) | Q3_K_L | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q5_K_S.gguf) | Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q5_K_M.gguf) | Q5_K_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q6_K.gguf) | Q6_K | 0.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.f16.gguf) | f16 | 0.4 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
test-gen/qwen2-1.5b-unique_lr1e-5
test-gen
2025-05-26T03:43:43Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-26T03:37:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ArtusDev/Delta-Vector_Sol-Reaver-15B-Instruct_EXL2_8.0bpw_H6
ArtusDev
2025-05-26T03:42:24Z
0
0
null
[ "safetensors", "mistral", "roleplay", "instruct", "creative_writing", "story-writing", "exl3", "dataset:Delta-Vector/Hydrus-Instruct-SmolTalk-V2", "dataset:Delta-Vector/Hydrus-SonnetOrca-V2", "dataset:Delta-Vector/Hydrus-FeedSum-ShareGPT", "dataset:Delta-Vector/Hydrus-Tulu-Personas-Filtered-Sharegpt", "dataset:Delta-Vector/Hydrus-No_Robots-R1-Filtered", "dataset:Delta-Vector/Hydrus-Chat_error-Pure-Dove-sharegpt", "dataset:Delta-Vector/Hydrus-HelpSteer2", "dataset:Delta-Vector/Hydrus-R1-Thinking-Sharegpt", "dataset:Delta-Vector/Hydrus-Science-QA-sharegpt", "dataset:Delta-Vector/Hydrus-Claude-Instruct-2.7K", "dataset:Delta-Vector/Hydrus-Claude-Instruct-5K", "dataset:PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4", "dataset:PocketDoc/Dans-Toolmaxx-ShellCommands", "dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small", "dataset:PocketDoc/Dans-Logicmaxx-SAT-AP", "dataset:PocketDoc/Dans-Benchmaxx", "dataset:Nitral-AI/ARES-ShareGPT", "dataset:PocketDoc/Dans-Taskmaxx-TableGPT", "dataset:Delta-Vector/Ursa-Erebus-16K", "dataset:Delta-Vector/Ursa-Books-Light-Novels-V1", "dataset:NewEden/Orion-LIT", "dataset:Delta-Vector/Ursa-Asstr-V2-18k", "dataset:Delta-Vector/Ursa-Books-V2", "dataset:Delta-Vector/Ursa-Scribblehub-7k", "dataset:Delta-Vector/Ursa-Orion-EA-Comp-Filtered", "dataset:Delta-Vector/Ursa-HoneyFeed", "dataset:Delta-Vector/Ursa-Falling-through-the-world", "base_model:Delta-Vector/Sol-Reaver-15B-Instruct", "base_model:quantized:Delta-Vector/Sol-Reaver-15B-Instruct", "8-bit", "exl2", "region:us" ]
null
2025-05-26T02:50:07Z
--- datasets: - Delta-Vector/Hydrus-Instruct-SmolTalk-V2 - Delta-Vector/Hydrus-SonnetOrca-V2 - Delta-Vector/Hydrus-FeedSum-ShareGPT - Delta-Vector/Hydrus-Tulu-Personas-Filtered-Sharegpt - Delta-Vector/Hydrus-No_Robots-R1-Filtered - Delta-Vector/Hydrus-Chat_error-Pure-Dove-sharegpt - Delta-Vector/Hydrus-HelpSteer2 - Delta-Vector/Hydrus-R1-Thinking-Sharegpt - Delta-Vector/Hydrus-Science-QA-sharegpt - Delta-Vector/Hydrus-Claude-Instruct-2.7K - Delta-Vector/Hydrus-Claude-Instruct-5K - PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4 - PocketDoc/Dans-Toolmaxx-ShellCommands - PocketDoc/Dans-MemoryCore-CoreCurriculum-Small - PocketDoc/Dans-Logicmaxx-SAT-AP - PocketDoc/Dans-Benchmaxx - Nitral-AI/ARES-ShareGPT - PocketDoc/Dans-Taskmaxx-TableGPT - Delta-Vector/Ursa-Erebus-16K - Delta-Vector/Ursa-Books-Light-Novels-V1 - NewEden/Orion-LIT - Delta-Vector/Ursa-Asstr-V2-18k - Delta-Vector/Ursa-Books-V2 - Delta-Vector/Ursa-Scribblehub-7k - Delta-Vector/Ursa-Orion-EA-Comp-Filtered - Delta-Vector/Ursa-HoneyFeed - Delta-Vector/Ursa-Falling-through-the-world base_model: - Delta-Vector/Sol-Reaver-15B-Instruct base_model_relation: quantized quantized_by: ArtusDev tags: - roleplay - instruct - creative_writing - story-writing - mistral - exl3 --- <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Sol-Reaver 15B</title> <link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet"> <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #ffeef8 0%, #fff0e6 50%, #f8e8ff 100%); color: #8b4a6b; margin: 0; padding: 0; font-size: 16px; min-height: 100vh; } .container { margin: 20px; background: linear-gradient(145deg, rgba(255, 255, 255, 0.9), rgba(255, 245, 250, 0.95)); padding: 30px; border-radius: 20px; box-shadow: 0 8px 32px rgba(255, 182, 193, 0.3), 0 4px 16px rgba(255, 215, 0, 0.2); border: 2px solid rgba(255, 182, 193, 0.4); position: relative; backdrop-filter: blur(10px); } .container::before { content: ''; position: absolute; top: 0; left: 0; right: 0; bottom: 0; background: linear-gradient(45deg, rgba(255, 192, 203, 0.1), rgba(255, 215, 0, 0.1), rgba(221, 160, 221, 0.1)); border-radius: 20px; z-index: -1; } .header h1 { font-size: 32px; background: linear-gradient(45deg, #d63384, #fd7e14, #e91e63); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; margin: 0 0 20px 0; text-align: center; font-weight: 600; text-shadow: 0 2px 4px rgba(255, 182, 193, 0.3); } .section { margin-top: 30px; } .section h2 { font-size: 24px; background: linear-gradient(45deg, #d63384, #fd7e14); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; text-align: center; font-weight: 600; margin-bottom: 20px; } .info p { color: #8b4a6b; line-height: 1.8; font-size: 16px; } .info img { width: 85%; border-radius: 15px; margin: 0 auto 15px; display: block; box-shadow: 0 8px 25px rgba(255, 182, 193, 0.4); border: 2px solid rgba(255, 192, 203, 0.5); } a { color: #d63384; text-decoration: none; transition: all 0.3s ease; font-weight: 500; } a:hover { color: #fd7e14; text-shadow: 0 0 8px rgba(255, 215, 0, 0.6); } .button { display: inline-block; background: linear-gradient(45deg, #ffb6c1, #ffd700); color: #8b4a6b; padding: 12px 24px; border-radius: 25px; cursor: pointer; text-decoration: none; transition: all 0.3s ease; border: 1px solid rgba(255, 182, 193, 0.5); font-weight: 500; } 
.button:hover { background: linear-gradient(45deg, #ff91a4, #ffed4e); box-shadow: 0 4px 15px rgba(255, 182, 193, 0.6); transform: translateY(-2px); } pre { background: linear-gradient(135deg, rgba(255, 240, 245, 0.8), rgba(255, 248, 220, 0.8)); padding: 20px; border-radius: 12px; overflow-x: auto; border: 1px solid rgba(255, 182, 193, 0.3); box-shadow: inset 0 2px 4px rgba(255, 182, 193, 0.2); } code { font-family: 'Courier New', monospace; color: #8b4a6b; } .info-card { background: linear-gradient(145deg, rgba(255, 240, 245, 0.9), rgba(255, 248, 220, 0.9)); border: 2px solid rgba(255, 182, 193, 0.4); border-radius: 15px; overflow: hidden; box-shadow: 0 4px 20px rgba(255, 182, 193, 0.3); } .info-header { background: linear-gradient(135deg, rgba(255, 192, 203, 0.3), rgba(255, 215, 0, 0.2)); padding: 25px; border-bottom: 1px solid rgba(255, 182, 193, 0.3); } .info-header h3 { background: linear-gradient(45deg, #d63384, #fd7e14); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; margin: 0 0 15px 0; font-size: 22px; text-align: center; font-weight: 600; } .model-tags { display: flex; gap: 10px; flex-wrap: wrap; justify-content: center; } .model-tag { background: linear-gradient(45deg, rgba(255, 182, 193, 0.4), rgba(255, 215, 0, 0.3)); color: #8b4a6b; padding: 8px 16px; border-radius: 20px; font-size: 13px; border: 1px solid rgba(255, 182, 193, 0.5); font-weight: 500; box-shadow: 0 2px 8px rgba(255, 182, 193, 0.2); } .model-composition { padding: 25px; border-bottom: 1px solid rgba(255, 182, 193, 0.3); } .model-composition h4 { background: linear-gradient(45deg, #d63384, #fd7e14); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; margin: 0 0 20px 0; font-size: 18px; text-align: center; font-weight: 600; } .composition-list { list-style: none; padding: 0; margin: 0; display: grid; gap: 15px; } .composition-list li { color: #8b4a6b; display: flex; align-items: baseline; gap: 12px; padding: 10px; background: rgba(255, 240, 245, 0.5); border-radius: 8px; border-left: 4px solid #ffb6c1; } .model-component { font-weight: 600; min-width: 120px; } .model-description { padding: 25px; background: linear-gradient(135deg, rgba(255, 255, 255, 0.7), rgba(255, 240, 245, 0.8)); } .metrics-section { margin-bottom: 30px; } .metrics-section details { background: linear-gradient(145deg, rgba(255, 240, 245, 0.9), rgba(255, 248, 220, 0.9)); border: 2px solid rgba(255, 182, 193, 0.4); border-radius: 12px; padding: 20px; margin-bottom: 20px; box-shadow: 0 4px 15px rgba(255, 182, 193, 0.2); } .metrics-section summary { background: linear-gradient(45deg, #d63384, #fd7e14); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; font-size: 18px; cursor: pointer; outline: none; padding: 8px 0; text-align: center; font-weight: 600; transition: all 0.3s ease; } .metrics-section summary:hover { text-shadow: 0 0 8px rgba(255, 215, 0, 0.6); } .creator-section { margin: 20px 0; text-align: center; } .creator-badge { display: inline-flex; align-items: center; background: linear-gradient(145deg, rgba(255, 240, 245, 0.9), rgba(255, 248, 220, 0.9)); border: 2px solid rgba(255, 182, 193, 0.4); border-radius: 25px; padding: 15px 20px; box-shadow: 0 4px 15px rgba(255, 182, 193, 0.3); } .creator-label { color: #8b4a6b; font-size: 14px; margin-right: 10px; font-weight: 500; } .creator-link { display: flex; align-items: center; gap: 8px; color: #d63384; text-decoration: none; transition: all 0.3s ease; } .creator-name { 
font-weight: 600; } .creator-arrow { font-size: 16px; transition: transform 0.3s ease; } .creator-link:hover .creator-arrow { transform: translateX(4px); color: #fd7e14; } .creator-link:hover { color: #fd7e14; text-shadow: 0 0 8px rgba(255, 215, 0, 0.6); } .link-arrow { display: inline-block; transition: transform 0.3s ease; } a:hover .link-arrow { transform: translateX(3px); } .axolotl-container { display: flex; text-align: center; /* This is correctly applied to center the image itself */ justify-content: center; margin: 30px 0; } .axolotl-container img { max-width: 300px; border-radius: 15px; box-shadow: 0 6px 20px rgba(255, 182, 193, 0.4); border: 2px solid rgba(255, 192, 203, 0.5); transition: transform 0.3s ease; display: block; /* Make the image a block element */ margin: 0 auto; /* Center it horizontally within its parent */ } .axolotl-container img:hover { transform: scale(1.05); } </style> </head> <body> <div class="container"> <div class="header"> <h1>Sol Reaver 15B</h1> </div> <div class="info"> <img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/DYgyLUEaHAv9kTffBYH-F.jpeg" alt="Model banner"> <div style="text-align: center;"> <div class="creator-section"> <div class="creator-badge"> <span class="creator-label">Created by</span> <a href="https://huggingface.co/Delta-Vector" target="_blank" class="creator-link"> <span class="creator-name">Delta-Vector</span> <span class="creator-arrow">→</span> </a> </div> </div> <div class="model-info"> <h2>Model Information</h2> <div class="info-card"> <div class="info-header"> <h3>Sol-Reaver-15B-Instruct</h3> <div class="model-tags"> <span class="model-tag">15B parameters</span> <span class="model-tag">Creative / Fresh Prose</span> <span class="model-tag">Co-writing/Roleplay/Adventure Generalist</span> </div> </div> <div class="model-description"> <p>The first in a new series of Roleplay / Adventure / Co-writer models, fine-tuned on top of Sol-Reaver-15B-Pretrain.</p> <p>This model has been trained on 200M tokens of high-quality instruct data. Its focus is to provide a base for further fine-tuning or merging.</p> <p>Its goal is to offer refreshing prose, creativity, good instruction following and the *brains*.</p> <p>Support me on Ko-Fi: https://ko-fi.com/deltavector</p> </div> </div> </div> <div class="section"> <h2>Quantized Versions</h2> <div class="info-card"> <div class="model-composition"> <h4>Available Downloads</h4> <ul class="composition-list"> <li><span class="model-component"><a href="" target="_blank">GGUF Format</a></span>For use with llama.cpp & forks (Coming Soon!)</li> <li><span class="model-component"><a href="" target="_blank">EXL2 Format</a></span>For use with TabbyAPI (Coming Soon!)</li> <li><span class="model-component"><a href="" target="_blank">EXL3 Format</a></span>For use with TabbyAPI (Slower on Ampere)</li> </ul> </div> </div> </div> <div class="section"> <h2>Prompting</h2> <p>The model has been tuned with ChatML formatting.
A typical input would look like this:</p> <pre><code>&lt;|im_start|&gt;user Hi there!&lt;|im_end|&gt; &lt;|im_start|&gt;assistant Nice to meet you!&lt;|im_end|&gt; &lt;|im_start|&gt;user Can I ask a question?&lt;|im_end|&gt; &lt;|im_start|&gt;assistant </code></pre> </div> <div class="section"> <h2>Samplers</h2> <p>For testing this model, I used Temp = 1 and Min-P = 0.1.</p> <div class="metrics-section"> <details> <summary>See Axolotl Config</summary> <pre><code> https://files.catbox.moe/u9dakg.yml </code></pre> </details> </div> </div> <div class="section"> <h2>Training</h2> <p>Training was done for 2 epochs using 8 x <a href="https://www.nvidia.com/en-us/data-center/h200/">H200</a> GPUs graciously provided by <a href="https://huggingface.co/kalomaze">Kalomaze</a> for the fine-tuning of the model.</p> <div class="axolotl-container"> <a href="https://github.com/OpenAccess-AI-Collective/axolotl" target="_blank"> <img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl"> </a> </div> </div> <div class="section"> <h2>Credits</h2> <p>Thank you to <a href="https://huggingface.co/lucyknada">Lucy Knada</a>, <a href="https://huggingface.co/Ateron">Ateron</a>, <a href="https://huggingface.co/AliCat2">Alicat</a>, <a href="https://huggingface.co/intervitens">Intervitens</a>, <a href="https://huggingface.co/cgato">Cgato</a>, <a href="https://huggingface.co/kubernetes-bad">Kubernetes Bad</a> and the rest of <a href="https://huggingface.co/anthracite-org">Anthracite</a>.</p> </div> </div> </div> </body> </html>
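Below is a minimal sketch of applying the samplers above (Temp = 1, Min-P = 0.1) through an OpenAI-compatible endpoint such as TabbyAPI; the base URL, API key, and served model name are assumptions about a local setup, not part of this card:

```python
# Sketch: chat request with the suggested samplers against a local
# OpenAI-compatible server (e.g. TabbyAPI). Endpoint details are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5000/v1", api_key="dummy")
response = client.chat.completions.create(
    model="Sol-Reaver-15B-Instruct",   # placeholder served-model name
    messages=[{"role": "user", "content": "Hi there!"}],
    temperature=1.0,                   # Temp = 1
    extra_body={"min_p": 0.1},         # Min-P = 0.1, applied server-side
)
print(response.choices[0].message.content)
```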
ArtusDev/Delta-Vector_Sol-Reaver-15B-Instruct_EXL2_6.0bpw_H6
ArtusDev
2025-05-26T03:41:03Z
0
0
null
[ "safetensors", "mistral", "roleplay", "instruct", "creative_writing", "story-writing", "exl3", "dataset:Delta-Vector/Hydrus-Instruct-SmolTalk-V2", "dataset:Delta-Vector/Hydrus-SonnetOrca-V2", "dataset:Delta-Vector/Hydrus-FeedSum-ShareGPT", "dataset:Delta-Vector/Hydrus-Tulu-Personas-Filtered-Sharegpt", "dataset:Delta-Vector/Hydrus-No_Robots-R1-Filtered", "dataset:Delta-Vector/Hydrus-Chat_error-Pure-Dove-sharegpt", "dataset:Delta-Vector/Hydrus-HelpSteer2", "dataset:Delta-Vector/Hydrus-R1-Thinking-Sharegpt", "dataset:Delta-Vector/Hydrus-Science-QA-sharegpt", "dataset:Delta-Vector/Hydrus-Claude-Instruct-2.7K", "dataset:Delta-Vector/Hydrus-Claude-Instruct-5K", "dataset:PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4", "dataset:PocketDoc/Dans-Toolmaxx-ShellCommands", "dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small", "dataset:PocketDoc/Dans-Logicmaxx-SAT-AP", "dataset:PocketDoc/Dans-Benchmaxx", "dataset:Nitral-AI/ARES-ShareGPT", "dataset:PocketDoc/Dans-Taskmaxx-TableGPT", "dataset:Delta-Vector/Ursa-Erebus-16K", "dataset:Delta-Vector/Ursa-Books-Light-Novels-V1", "dataset:NewEden/Orion-LIT", "dataset:Delta-Vector/Ursa-Asstr-V2-18k", "dataset:Delta-Vector/Ursa-Books-V2", "dataset:Delta-Vector/Ursa-Scribblehub-7k", "dataset:Delta-Vector/Ursa-Orion-EA-Comp-Filtered", "dataset:Delta-Vector/Ursa-HoneyFeed", "dataset:Delta-Vector/Ursa-Falling-through-the-world", "base_model:Delta-Vector/Sol-Reaver-15B-Instruct", "base_model:quantized:Delta-Vector/Sol-Reaver-15B-Instruct", "6-bit", "exl2", "region:us" ]
null
2025-05-26T02:48:22Z
--- datasets: - Delta-Vector/Hydrus-Instruct-SmolTalk-V2 - Delta-Vector/Hydrus-SonnetOrca-V2 - Delta-Vector/Hydrus-FeedSum-ShareGPT - Delta-Vector/Hydrus-Tulu-Personas-Filtered-Sharegpt - Delta-Vector/Hydrus-No_Robots-R1-Filtered - Delta-Vector/Hydrus-Chat_error-Pure-Dove-sharegpt - Delta-Vector/Hydrus-HelpSteer2 - Delta-Vector/Hydrus-R1-Thinking-Sharegpt - Delta-Vector/Hydrus-Science-QA-sharegpt - Delta-Vector/Hydrus-Claude-Instruct-2.7K - Delta-Vector/Hydrus-Claude-Instruct-5K - PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4 - PocketDoc/Dans-Toolmaxx-ShellCommands - PocketDoc/Dans-MemoryCore-CoreCurriculum-Small - PocketDoc/Dans-Logicmaxx-SAT-AP - PocketDoc/Dans-Benchmaxx - Nitral-AI/ARES-ShareGPT - PocketDoc/Dans-Taskmaxx-TableGPT - Delta-Vector/Ursa-Erebus-16K - Delta-Vector/Ursa-Books-Light-Novels-V1 - NewEden/Orion-LIT - Delta-Vector/Ursa-Asstr-V2-18k - Delta-Vector/Ursa-Books-V2 - Delta-Vector/Ursa-Scribblehub-7k - Delta-Vector/Ursa-Orion-EA-Comp-Filtered - Delta-Vector/Ursa-HoneyFeed - Delta-Vector/Ursa-Falling-through-the-world base_model: - Delta-Vector/Sol-Reaver-15B-Instruct base_model_relation: quantized quantized_by: ArtusDev tags: - roleplay - instruct - creative_writing - story-writing - mistral - exl3 --- <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Sol-Reaver 15B</title> <link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet"> <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #ffeef8 0%, #fff0e6 50%, #f8e8ff 100%); color: #8b4a6b; margin: 0; padding: 0; font-size: 16px; min-height: 100vh; } .container { margin: 20px; background: linear-gradient(145deg, rgba(255, 255, 255, 0.9), rgba(255, 245, 250, 0.95)); padding: 30px; border-radius: 20px; box-shadow: 0 8px 32px rgba(255, 182, 193, 0.3), 0 4px 16px rgba(255, 215, 0, 0.2); border: 2px solid rgba(255, 182, 193, 0.4); position: relative; backdrop-filter: blur(10px); } .container::before { content: ''; position: absolute; top: 0; left: 0; right: 0; bottom: 0; background: linear-gradient(45deg, rgba(255, 192, 203, 0.1), rgba(255, 215, 0, 0.1), rgba(221, 160, 221, 0.1)); border-radius: 20px; z-index: -1; } .header h1 { font-size: 32px; background: linear-gradient(45deg, #d63384, #fd7e14, #e91e63); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; margin: 0 0 20px 0; text-align: center; font-weight: 600; text-shadow: 0 2px 4px rgba(255, 182, 193, 0.3); } .section { margin-top: 30px; } .section h2 { font-size: 24px; background: linear-gradient(45deg, #d63384, #fd7e14); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; text-align: center; font-weight: 600; margin-bottom: 20px; } .info p { color: #8b4a6b; line-height: 1.8; font-size: 16px; } .info img { width: 85%; border-radius: 15px; margin: 0 auto 15px; display: block; box-shadow: 0 8px 25px rgba(255, 182, 193, 0.4); border: 2px solid rgba(255, 192, 203, 0.5); } a { color: #d63384; text-decoration: none; transition: all 0.3s ease; font-weight: 500; } a:hover { color: #fd7e14; text-shadow: 0 0 8px rgba(255, 215, 0, 0.6); } .button { display: inline-block; background: linear-gradient(45deg, #ffb6c1, #ffd700); color: #8b4a6b; padding: 12px 24px; border-radius: 25px; cursor: pointer; text-decoration: none; transition: all 0.3s ease; border: 1px solid rgba(255, 182, 193, 0.5); font-weight: 500; } 
.button:hover { background: linear-gradient(45deg, #ff91a4, #ffed4e); box-shadow: 0 4px 15px rgba(255, 182, 193, 0.6); transform: translateY(-2px); } pre { background: linear-gradient(135deg, rgba(255, 240, 245, 0.8), rgba(255, 248, 220, 0.8)); padding: 20px; border-radius: 12px; overflow-x: auto; border: 1px solid rgba(255, 182, 193, 0.3); box-shadow: inset 0 2px 4px rgba(255, 182, 193, 0.2); } code { font-family: 'Courier New', monospace; color: #8b4a6b; } .info-card { background: linear-gradient(145deg, rgba(255, 240, 245, 0.9), rgba(255, 248, 220, 0.9)); border: 2px solid rgba(255, 182, 193, 0.4); border-radius: 15px; overflow: hidden; box-shadow: 0 4px 20px rgba(255, 182, 193, 0.3); } .info-header { background: linear-gradient(135deg, rgba(255, 192, 203, 0.3), rgba(255, 215, 0, 0.2)); padding: 25px; border-bottom: 1px solid rgba(255, 182, 193, 0.3); } .info-header h3 { background: linear-gradient(45deg, #d63384, #fd7e14); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; margin: 0 0 15px 0; font-size: 22px; text-align: center; font-weight: 600; } .model-tags { display: flex; gap: 10px; flex-wrap: wrap; justify-content: center; } .model-tag { background: linear-gradient(45deg, rgba(255, 182, 193, 0.4), rgba(255, 215, 0, 0.3)); color: #8b4a6b; padding: 8px 16px; border-radius: 20px; font-size: 13px; border: 1px solid rgba(255, 182, 193, 0.5); font-weight: 500; box-shadow: 0 2px 8px rgba(255, 182, 193, 0.2); } .model-composition { padding: 25px; border-bottom: 1px solid rgba(255, 182, 193, 0.3); } .model-composition h4 { background: linear-gradient(45deg, #d63384, #fd7e14); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; margin: 0 0 20px 0; font-size: 18px; text-align: center; font-weight: 600; } .composition-list { list-style: none; padding: 0; margin: 0; display: grid; gap: 15px; } .composition-list li { color: #8b4a6b; display: flex; align-items: baseline; gap: 12px; padding: 10px; background: rgba(255, 240, 245, 0.5); border-radius: 8px; border-left: 4px solid #ffb6c1; } .model-component { font-weight: 600; min-width: 120px; } .model-description { padding: 25px; background: linear-gradient(135deg, rgba(255, 255, 255, 0.7), rgba(255, 240, 245, 0.8)); } .metrics-section { margin-bottom: 30px; } .metrics-section details { background: linear-gradient(145deg, rgba(255, 240, 245, 0.9), rgba(255, 248, 220, 0.9)); border: 2px solid rgba(255, 182, 193, 0.4); border-radius: 12px; padding: 20px; margin-bottom: 20px; box-shadow: 0 4px 15px rgba(255, 182, 193, 0.2); } .metrics-section summary { background: linear-gradient(45deg, #d63384, #fd7e14); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; font-size: 18px; cursor: pointer; outline: none; padding: 8px 0; text-align: center; font-weight: 600; transition: all 0.3s ease; } .metrics-section summary:hover { text-shadow: 0 0 8px rgba(255, 215, 0, 0.6); } .creator-section { margin: 20px 0; text-align: center; } .creator-badge { display: inline-flex; align-items: center; background: linear-gradient(145deg, rgba(255, 240, 245, 0.9), rgba(255, 248, 220, 0.9)); border: 2px solid rgba(255, 182, 193, 0.4); border-radius: 25px; padding: 15px 20px; box-shadow: 0 4px 15px rgba(255, 182, 193, 0.3); } .creator-label { color: #8b4a6b; font-size: 14px; margin-right: 10px; font-weight: 500; } .creator-link { display: flex; align-items: center; gap: 8px; color: #d63384; text-decoration: none; transition: all 0.3s ease; } .creator-name { 
font-weight: 600; } .creator-arrow { font-size: 16px; transition: transform 0.3s ease; } .creator-link:hover .creator-arrow { transform: translateX(4px); color: #fd7e14; } .creator-link:hover { color: #fd7e14; text-shadow: 0 0 8px rgba(255, 215, 0, 0.6); } .link-arrow { display: inline-block; transition: transform 0.3s ease; } a:hover .link-arrow { transform: translateX(3px); } .axolotl-container { display: flex; text-align: center; /* This is correctly applied to center the image itself */ justify-content: center; margin: 30px 0; } .axolotl-container img { max-width: 300px; border-radius: 15px; box-shadow: 0 6px 20px rgba(255, 182, 193, 0.4); border: 2px solid rgba(255, 192, 203, 0.5); transition: transform 0.3s ease; display: block; /* Make the image a block element */ margin: 0 auto; /* Center it horizontally within its parent */ } .axolotl-container img:hover { transform: scale(1.05); } </style> </head> <body> <div class="container"> <div class="header"> <h1>Sol Reaver 15B</h1> </div> <div class="info"> <img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/DYgyLUEaHAv9kTffBYH-F.jpeg" alt="Model banner"> <div style="text-align: center;"> <div class="creator-section"> <div class="creator-badge"> <span class="creator-label">Created by</span> <a href="https://huggingface.co/Delta-Vector" target="_blank" class="creator-link"> <span class="creator-name">Delta-Vector</span> <span class="creator-arrow">→</span> </a> </div> </div> <div class="model-info"> <h2>Model Information</h2> <div class="info-card"> <div class="info-header"> <h3>Sol-Reaver-15B-Instruct</h3> <div class="model-tags"> <span class="model-tag">15B parameters</span> <span class="model-tag">Creative / Fresh Prose</span> <span class="model-tag">Co-writing/Roleplay/Adventure Generalist</span> </div> </div> <div class="model-description"> <p>The first in a new series of Roleplay / Adventure / Co-writer models, fine-tuned on top of Sol-Reaver-15B-Pretrain.</p> <p>This model has been trained on 200M tokens of high-quality instruct data. Its focus is to provide a base for further fine-tuning or merging.</p> <p>Its goal is to offer refreshing prose, creativity, good instruction following and the *brains*.</p> <p>Support me on Ko-Fi: https://ko-fi.com/deltavector</p> </div> </div> </div> <div class="section"> <h2>Quantized Versions</h2> <div class="info-card"> <div class="model-composition"> <h4>Available Downloads</h4> <ul class="composition-list"> <li><span class="model-component"><a href="" target="_blank">GGUF Format</a></span>For use with llama.cpp & forks (Coming Soon!)</li> <li><span class="model-component"><a href="" target="_blank">EXL2 Format</a></span>For use with TabbyAPI (Coming Soon!)</li> <li><span class="model-component"><a href="" target="_blank">EXL3 Format</a></span>For use with TabbyAPI (Slower on Ampere)</li> </ul> </div> </div> </div> <div class="section"> <h2>Prompting</h2> <p>The model has been tuned with ChatML formatting.
A typical input would look like this:</p> <pre><code>&lt;|im_start|&gt;user Hi there!&lt;|im_end|&gt; &lt;|im_start|&gt;assistant Nice to meet you!&lt;|im_end|&gt; &lt;|im_start|&gt;user Can I ask a question?&lt;|im_end|&gt; &lt;|im_start|&gt;assistant </code></pre> </div> <div class="section"> <h2>Samplers</h2> <p>For testing this model, I used Temp = 1 and Min-P = 0.1.</p> <div class="metrics-section"> <details> <summary>See Axolotl Config</summary> <pre><code> https://files.catbox.moe/u9dakg.yml </code></pre> </details> </div> </div> <div class="section"> <h2>Training</h2> <p>Training was done for 2 epochs using 8 x <a href="https://www.nvidia.com/en-us/data-center/h200/">H200</a> GPUs graciously provided by <a href="https://huggingface.co/kalomaze">Kalomaze</a> for the fine-tuning of the model.</p> <div class="axolotl-container"> <a href="https://github.com/OpenAccess-AI-Collective/axolotl" target="_blank"> <img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl"> </a> </div> </div> <div class="section"> <h2>Credits</h2> <p>Thank you to <a href="https://huggingface.co/lucyknada">Lucy Knada</a>, <a href="https://huggingface.co/Ateron">Ateron</a>, <a href="https://huggingface.co/AliCat2">Alicat</a>, <a href="https://huggingface.co/intervitens">Intervitens</a>, <a href="https://huggingface.co/cgato">Cgato</a>, <a href="https://huggingface.co/kubernetes-bad">Kubernetes Bad</a> and the rest of <a href="https://huggingface.co/anthracite-org">Anthracite</a>.</p> </div> </div> </div> </body> </html>
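As in the 8.0bpw card above, here is a minimal sketch of applying these samplers (Temp = 1, Min-P = 0.1) through an OpenAI-compatible endpoint such as TabbyAPI; the base URL, API key, and served model name are assumptions about a local setup:

```python
# Sketch: chat request with the suggested samplers against a local
# OpenAI-compatible server (e.g. TabbyAPI). Endpoint details are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5000/v1", api_key="dummy")
response = client.chat.completions.create(
    model="Sol-Reaver-15B-Instruct",   # placeholder served-model name
    messages=[{"role": "user", "content": "Hi there!"}],
    temperature=1.0,                   # Temp = 1
    extra_body={"min_p": 0.1},         # Min-P = 0.1, applied server-side
)
print(response.choices[0].message.content)
```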
ArtusDev/Delta-Vector_Sol-Reaver-15B-Instruct_EXL2_4.5bpw_H6
ArtusDev
2025-05-26T03:39:21Z
0
0
null
[ "safetensors", "mistral", "roleplay", "instruct", "creative_writing", "story-writing", "exl3", "dataset:Delta-Vector/Hydrus-Instruct-SmolTalk-V2", "dataset:Delta-Vector/Hydrus-SonnetOrca-V2", "dataset:Delta-Vector/Hydrus-FeedSum-ShareGPT", "dataset:Delta-Vector/Hydrus-Tulu-Personas-Filtered-Sharegpt", "dataset:Delta-Vector/Hydrus-No_Robots-R1-Filtered", "dataset:Delta-Vector/Hydrus-Chat_error-Pure-Dove-sharegpt", "dataset:Delta-Vector/Hydrus-HelpSteer2", "dataset:Delta-Vector/Hydrus-R1-Thinking-Sharegpt", "dataset:Delta-Vector/Hydrus-Science-QA-sharegpt", "dataset:Delta-Vector/Hydrus-Claude-Instruct-2.7K", "dataset:Delta-Vector/Hydrus-Claude-Instruct-5K", "dataset:PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4", "dataset:PocketDoc/Dans-Toolmaxx-ShellCommands", "dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small", "dataset:PocketDoc/Dans-Logicmaxx-SAT-AP", "dataset:PocketDoc/Dans-Benchmaxx", "dataset:Nitral-AI/ARES-ShareGPT", "dataset:PocketDoc/Dans-Taskmaxx-TableGPT", "dataset:Delta-Vector/Ursa-Erebus-16K", "dataset:Delta-Vector/Ursa-Books-Light-Novels-V1", "dataset:NewEden/Orion-LIT", "dataset:Delta-Vector/Ursa-Asstr-V2-18k", "dataset:Delta-Vector/Ursa-Books-V2", "dataset:Delta-Vector/Ursa-Scribblehub-7k", "dataset:Delta-Vector/Ursa-Orion-EA-Comp-Filtered", "dataset:Delta-Vector/Ursa-HoneyFeed", "dataset:Delta-Vector/Ursa-Falling-through-the-world", "base_model:Delta-Vector/Sol-Reaver-15B-Instruct", "base_model:quantized:Delta-Vector/Sol-Reaver-15B-Instruct", "exl2", "region:us" ]
null
2025-05-26T02:45:22Z
--- datasets: - Delta-Vector/Hydrus-Instruct-SmolTalk-V2 - Delta-Vector/Hydrus-SonnetOrca-V2 - Delta-Vector/Hydrus-FeedSum-ShareGPT - Delta-Vector/Hydrus-Tulu-Personas-Filtered-Sharegpt - Delta-Vector/Hydrus-No_Robots-R1-Filtered - Delta-Vector/Hydrus-Chat_error-Pure-Dove-sharegpt - Delta-Vector/Hydrus-HelpSteer2 - Delta-Vector/Hydrus-R1-Thinking-Sharegpt - Delta-Vector/Hydrus-Science-QA-sharegpt - Delta-Vector/Hydrus-Claude-Instruct-2.7K - Delta-Vector/Hydrus-Claude-Instruct-5K - PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4 - PocketDoc/Dans-Toolmaxx-ShellCommands - PocketDoc/Dans-MemoryCore-CoreCurriculum-Small - PocketDoc/Dans-Logicmaxx-SAT-AP - PocketDoc/Dans-Benchmaxx - Nitral-AI/ARES-ShareGPT - PocketDoc/Dans-Taskmaxx-TableGPT - Delta-Vector/Ursa-Erebus-16K - Delta-Vector/Ursa-Books-Light-Novels-V1 - NewEden/Orion-LIT - Delta-Vector/Ursa-Asstr-V2-18k - Delta-Vector/Ursa-Books-V2 - Delta-Vector/Ursa-Scribblehub-7k - Delta-Vector/Ursa-Orion-EA-Comp-Filtered - Delta-Vector/Ursa-HoneyFeed - Delta-Vector/Ursa-Falling-through-the-world base_model: - Delta-Vector/Sol-Reaver-15B-Instruct base_model_relation: quantized quantized_by: ArtusDev tags: - roleplay - instruct - creative_writing - story-writing - mistral - exl3 --- <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Sol-Reaver 15B</title> <link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet"> <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #ffeef8 0%, #fff0e6 50%, #f8e8ff 100%); color: #8b4a6b; margin: 0; padding: 0; font-size: 16px; min-height: 100vh; } .container { margin: 20px; background: linear-gradient(145deg, rgba(255, 255, 255, 0.9), rgba(255, 245, 250, 0.95)); padding: 30px; border-radius: 20px; box-shadow: 0 8px 32px rgba(255, 182, 193, 0.3), 0 4px 16px rgba(255, 215, 0, 0.2); border: 2px solid rgba(255, 182, 193, 0.4); position: relative; backdrop-filter: blur(10px); } .container::before { content: ''; position: absolute; top: 0; left: 0; right: 0; bottom: 0; background: linear-gradient(45deg, rgba(255, 192, 203, 0.1), rgba(255, 215, 0, 0.1), rgba(221, 160, 221, 0.1)); border-radius: 20px; z-index: -1; } .header h1 { font-size: 32px; background: linear-gradient(45deg, #d63384, #fd7e14, #e91e63); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; margin: 0 0 20px 0; text-align: center; font-weight: 600; text-shadow: 0 2px 4px rgba(255, 182, 193, 0.3); } .section { margin-top: 30px; } .section h2 { font-size: 24px; background: linear-gradient(45deg, #d63384, #fd7e14); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; text-align: center; font-weight: 600; margin-bottom: 20px; } .info p { color: #8b4a6b; line-height: 1.8; font-size: 16px; } .info img { width: 85%; border-radius: 15px; margin: 0 auto 15px; display: block; box-shadow: 0 8px 25px rgba(255, 182, 193, 0.4); border: 2px solid rgba(255, 192, 203, 0.5); } a { color: #d63384; text-decoration: none; transition: all 0.3s ease; font-weight: 500; } a:hover { color: #fd7e14; text-shadow: 0 0 8px rgba(255, 215, 0, 0.6); } .button { display: inline-block; background: linear-gradient(45deg, #ffb6c1, #ffd700); color: #8b4a6b; padding: 12px 24px; border-radius: 25px; cursor: pointer; text-decoration: none; transition: all 0.3s ease; border: 1px solid rgba(255, 182, 193, 0.5); font-weight: 500; } 
.button:hover { background: linear-gradient(45deg, #ff91a4, #ffed4e); box-shadow: 0 4px 15px rgba(255, 182, 193, 0.6); transform: translateY(-2px); } pre { background: linear-gradient(135deg, rgba(255, 240, 245, 0.8), rgba(255, 248, 220, 0.8)); padding: 20px; border-radius: 12px; overflow-x: auto; border: 1px solid rgba(255, 182, 193, 0.3); box-shadow: inset 0 2px 4px rgba(255, 182, 193, 0.2); } code { font-family: 'Courier New', monospace; color: #8b4a6b; } .info-card { background: linear-gradient(145deg, rgba(255, 240, 245, 0.9), rgba(255, 248, 220, 0.9)); border: 2px solid rgba(255, 182, 193, 0.4); border-radius: 15px; overflow: hidden; box-shadow: 0 4px 20px rgba(255, 182, 193, 0.3); } .info-header { background: linear-gradient(135deg, rgba(255, 192, 203, 0.3), rgba(255, 215, 0, 0.2)); padding: 25px; border-bottom: 1px solid rgba(255, 182, 193, 0.3); } .info-header h3 { background: linear-gradient(45deg, #d63384, #fd7e14); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; margin: 0 0 15px 0; font-size: 22px; text-align: center; font-weight: 600; } .model-tags { display: flex; gap: 10px; flex-wrap: wrap; justify-content: center; } .model-tag { background: linear-gradient(45deg, rgba(255, 182, 193, 0.4), rgba(255, 215, 0, 0.3)); color: #8b4a6b; padding: 8px 16px; border-radius: 20px; font-size: 13px; border: 1px solid rgba(255, 182, 193, 0.5); font-weight: 500; box-shadow: 0 2px 8px rgba(255, 182, 193, 0.2); } .model-composition { padding: 25px; border-bottom: 1px solid rgba(255, 182, 193, 0.3); } .model-composition h4 { background: linear-gradient(45deg, #d63384, #fd7e14); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; margin: 0 0 20px 0; font-size: 18px; text-align: center; font-weight: 600; } .composition-list { list-style: none; padding: 0; margin: 0; display: grid; gap: 15px; } .composition-list li { color: #8b4a6b; display: flex; align-items: baseline; gap: 12px; padding: 10px; background: rgba(255, 240, 245, 0.5); border-radius: 8px; border-left: 4px solid #ffb6c1; } .model-component { font-weight: 600; min-width: 120px; } .model-description { padding: 25px; background: linear-gradient(135deg, rgba(255, 255, 255, 0.7), rgba(255, 240, 245, 0.8)); } .metrics-section { margin-bottom: 30px; } .metrics-section details { background: linear-gradient(145deg, rgba(255, 240, 245, 0.9), rgba(255, 248, 220, 0.9)); border: 2px solid rgba(255, 182, 193, 0.4); border-radius: 12px; padding: 20px; margin-bottom: 20px; box-shadow: 0 4px 15px rgba(255, 182, 193, 0.2); } .metrics-section summary { background: linear-gradient(45deg, #d63384, #fd7e14); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; font-size: 18px; cursor: pointer; outline: none; padding: 8px 0; text-align: center; font-weight: 600; transition: all 0.3s ease; } .metrics-section summary:hover { text-shadow: 0 0 8px rgba(255, 215, 0, 0.6); } .creator-section { margin: 20px 0; text-align: center; } .creator-badge { display: inline-flex; align-items: center; background: linear-gradient(145deg, rgba(255, 240, 245, 0.9), rgba(255, 248, 220, 0.9)); border: 2px solid rgba(255, 182, 193, 0.4); border-radius: 25px; padding: 15px 20px; box-shadow: 0 4px 15px rgba(255, 182, 193, 0.3); } .creator-label { color: #8b4a6b; font-size: 14px; margin-right: 10px; font-weight: 500; } .creator-link { display: flex; align-items: center; gap: 8px; color: #d63384; text-decoration: none; transition: all 0.3s ease; } .creator-name { 
font-weight: 600; } .creator-arrow { font-size: 16px; transition: transform 0.3s ease; } .creator-link:hover .creator-arrow { transform: translateX(4px); color: #fd7e14; } .creator-link:hover { color: #fd7e14; text-shadow: 0 0 8px rgba(255, 215, 0, 0.6); } .link-arrow { display: inline-block; transition: transform 0.3s ease; } a:hover .link-arrow { transform: translateX(3px); } .axolotl-container { display: flex; text-align: center; /* This is correctly applied to center the image itself */ justify-content: center; margin: 30px 0; } .axolotl-container img { max-width: 300px; border-radius: 15px; box-shadow: 0 6px 20px rgba(255, 182, 193, 0.4); border: 2px solid rgba(255, 192, 203, 0.5); transition: transform 0.3s ease; display: block; /* Make the image a block element */ margin: 0 auto; /* Center it horizontally within its parent */ } .axolotl-container img:hover { transform: scale(1.05); } </style> </head> <body> <div class="container"> <div class="header"> <h1>Sol Reaver 15B</h1> </div> <div class="info"> <img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/DYgyLUEaHAv9kTffBYH-F.jpeg" alt="Model banner"> <div style="text-align: center;"> <div class="creator-section"> <div class="creator-badge"> <span class="creator-label">Created by</span> <a href="https://huggingface.co/Delta-Vector" target="_blank" class="creator-link"> <span class="creator-name">Delta-Vector</span> <span class="creator-arrow">→</span> </a> </div> </div> <div class="model-info"> <h2>Model Information</h2> <div class="info-card"> <div class="info-header"> <h3>Sol-Reaver-15B-Instruct</h3> <div class="model-tags"> <span class="model-tag">15B parameters</span> <span class="model-tag">Creative / Fresh Prose</span> <span class="model-tag">Co-writing/Roleplay/Adventure Generalist</span> </div> </div> <div class="model-description"> <p>The first in a new series of Roleplay / Adventure / Co-writer models, fine-tuned on top of Sol-Reaver-15B-Pretrain.</p> <p>This model has been trained on 200M tokens of high-quality instruct data. Its focus is to provide a base for further fine-tuning or merging.</p> <p>Its goal is to offer refreshing prose, creativity, good instruction following and the *brains*.</p> <p>Support me on Ko-Fi: https://ko-fi.com/deltavector</p> </div> </div> </div> <div class="section"> <h2>Quantized Versions</h2> <div class="info-card"> <div class="model-composition"> <h4>Available Downloads</h4> <ul class="composition-list"> <li><span class="model-component"><a href="" target="_blank">GGUF Format</a></span>For use with llama.cpp & forks (Coming Soon!)</li> <li><span class="model-component"><a href="" target="_blank">EXL2 Format</a></span>For use with TabbyAPI (Coming Soon!)</li> <li><span class="model-component"><a href="" target="_blank">EXL3 Format</a></span>For use with TabbyAPI (Slower on Ampere)</li> </ul> </div> </div> </div> <div class="section"> <h2>Prompting</h2> <p>The model has been tuned with ChatML formatting.
A typical input would look like this:</p> <pre><code>&lt;|im_start|&gt;user Hi there!&lt;|im_end|&gt; &lt;|im_start|&gt;assistant Nice to meet you!&lt;|im_end|&gt; &lt;|im_start|&gt;user Can I ask a question?&lt;|im_end|&gt; &lt;|im_start|&gt;assistant </code></pre> </div> <div class="section"> <h2>Samplers</h2> <p>For testing this model, I used Temp = 1 and Min-P = 0.1.</p> <div class="metrics-section"> <details> <summary>See Axolotl Config</summary> <pre><code> https://files.catbox.moe/u9dakg.yml </code></pre> </details> </div> </div> <div class="section"> <h2>Training</h2> <p>Training was done for 2 epochs using 8 x <a href="https://www.nvidia.com/en-us/data-center/h200/">H200</a> GPUs graciously provided by <a href="https://huggingface.co/kalomaze">Kalomaze</a> for the fine-tuning of the model.</p> <div class="axolotl-container"> <a href="https://github.com/OpenAccess-AI-Collective/axolotl" target="_blank"> <img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl"> </a> </div> </div> <div class="section"> <h2>Credits</h2> <p>Thank you to <a href="https://huggingface.co/lucyknada">Lucy Knada</a>, <a href="https://huggingface.co/Ateron">Ateron</a>, <a href="https://huggingface.co/AliCat2">Alicat</a>, <a href="https://huggingface.co/intervitens">Intervitens</a>, <a href="https://huggingface.co/cgato">Cgato</a>, <a href="https://huggingface.co/kubernetes-bad">Kubernetes Bad</a> and the rest of <a href="https://huggingface.co/anthracite-org">Anthracite</a>.</p> </div> </div> </div> </body> </html>
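As in the other quant cards above, here is a minimal sketch of applying these samplers (Temp = 1, Min-P = 0.1) through an OpenAI-compatible endpoint such as TabbyAPI; the base URL, API key, and served model name are assumptions about a local setup:

```python
# Sketch: chat request with the suggested samplers against a local
# OpenAI-compatible server (e.g. TabbyAPI). Endpoint details are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5000/v1", api_key="dummy")
response = client.chat.completions.create(
    model="Sol-Reaver-15B-Instruct",   # placeholder served-model name
    messages=[{"role": "user", "content": "Hi there!"}],
    temperature=1.0,                   # Temp = 1
    extra_body={"min_p": 0.1},         # Min-P = 0.1, applied server-side
)
print(response.choices[0].message.content)
```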
JavaneseHonorifics/Unggah-Ungguh-Javanese-Distilbert-Classifier
JavaneseHonorifics
2025-05-26T03:35:54Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "jv", "dataset:JavaneseHonorifics/Unggah-Ungguh", "arxiv:2502.20864", "base_model:w11wo/javanese-distilbert-small-imdb", "base_model:finetune:w11wo/javanese-distilbert-small-imdb", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-26T03:13:34Z
--- license: cc-by-nc-4.0 language: - jv datasets: - JavaneseHonorifics/Unggah-Ungguh base_model: - w11wo/javanese-distilbert-small-imdb pipeline_tag: text-classification library_name: transformers --- # Unggah-Ungguh-Javanese-Distilbert-Classifier Unggah-Ungguh-Javanese-Distilbert-Classifier is part of the Unggah-Ungguh model family: a classifier model for the Javanese honorific classification task described in "Do Language Models Understand Honorific Systems in Javanese?". Check out [our paper](https://arxiv.org/abs/2502.20864) for more information! ## Model description - **Model type:** A classifier model trained on a highly curated Unggah-Ungguh dataset that represents Javanese honorific rules and systems. - **Language(s) (NLP):** Javanese - **License:** CC-BY-NC 4.0 - **Finetuned from model:** w11wo/javanese-distilbert-small-imdb ## Model Sources - **Project Page:** https://javanesehonorifics.github.io/ - **Repository:** https://github.com/JavaneseHonorifics - **Paper:** https://arxiv.org/abs/2502.20864 ## Using the model ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model_path = "JavaneseHonorifics/Unggah-Ungguh-Javanese-Distilbert-Classifier" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForSequenceClassification.from_pretrained(model_path) INPUT_TEXT = "Mbak Srini mangan pecel ajange pincuk" tokenized_input = tokenizer([INPUT_TEXT], return_tensors="pt", truncation=True, padding=True) with torch.no_grad(): outputs = model(**tokenized_input) y_pred = outputs.logits.argmax(-1) print("Predicted class:", y_pred.item()) ``` ## License and Use Unggah-Ungguh is licensed under CC-BY-NC 4.0. ## Citation ```bibtex @article{farhansyah2025language, title={Do Language Models Understand Honorific Systems in Javanese?}, author={Farhansyah, Mohammad Rifqi and Darmawan, Iwan and Kusumawardhana, Adryan and Winata, Genta Indra and Aji, Alham Fikri and Wijaya, Derry Tanti}, journal={arXiv preprint arXiv:2502.20864}, year={2025} } ```
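To make the prediction human-readable, you can resolve the class index through the checkpoint's label mapping; the sketch below continues the usage example above and assumes the config ships an `id2label` mapping, falling back to the raw index otherwise:

```python
# Continuation of the usage example: map the predicted class index to a
# label name via config.id2label, if the checkpoint provides one (an
# assumption); otherwise fall back to printing the raw index.
label_id = y_pred.item()
id2label = getattr(model.config, "id2label", None) or {}
print("Predicted label:", id2label.get(label_id, label_id))
```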
mci29/sn29_s0m2_enfh
mci29
2025-05-26T03:31:49Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-26T03:28:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Gen-Verse/ReasonFlux-V2-32B
Gen-Verse
2025-05-26T03:31:15Z
0
3
null
[ "safetensors", "region:us" ]
null
2025-05-25T14:41:09Z
<div align="center"> <h1>ReasonFlux-V2: Internalizing Template-Augmented LLM Reasoning with Hierarchical Reinforcement Learning</h1> </div> <p align="center"> <img src="./figs/comparison.png" width=80%> </p> **ReasonFlux-V2** is our new template-augmented reasoning paradigm, which **internalizes thought templates** through **iterative hierarchical reinforcement learning**. Specifically, we first develop an automated pipeline to extract thought templates from the problem–solution pairs in the training set. To effectively internalize these high-level thought templates and learn a more efficient reasoning paradigm, we propose two collaborative modules: the **Template Proposer**, which adaptively proposes suitable thought templates based on the input problem; and the **Template Reasoner**, which instantiates the proposed templates and performs precise, detailed reasoning. Building upon these modules, we iteratively conduct **hierarchical RL** to optimize both modules. <p align="center"> <img src="./figs/ReasonFluxv2_method.png" width=80%> </p> **ReasonFlux-V2** offers a more efficient, generalizable solution for enhancing the complex reasoning capabilities of LLMs. Compared with conventional reasoning LLMs, **ReasonFlux-V2** solves problems correctly and efficiently with lower token consumption and inference time. **We will release the paper for ReasonFlux-V2 soon.** ReasonFlux-V2 consists of two main modules: 1. **Template Proposer**, which **adaptively** proposes suitable high-level thought templates based on the input problem. It plays the role of humans' intuitive thinking, helping to **narrow the exploration space** of the detailed reasoning process and thus **improve solution efficiency**. 2. **Template Reasoner**, which follows the proposed high-level thought template to efficiently and effectively solve the corresponding problem. <p align="center"> <img src="./figs/reasonflux_v2.png" width=80%> </p> Paper (coming soon) | [Code](https://github.com/Gen-Verse/ReasonFlux) | [Template](Gen-Verse/ReasonFlux-V2-Template) | [SFT Dataset](https://huggingface.co/datasets/Gen-Verse/ReasonFlux-V2-SFT/) | [DPO Dataset](https://huggingface.co/datasets/Gen-Verse/ReasonFlux-V2-DPO)
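As a rough sketch of how the two modules compose at inference time, the snippet below chains a proposer step and a reasoner step; the prompts, the reuse of a single checkpoint for both roles, and loading via `transformers` are illustrative assumptions, not the project's actual API:

```python
# Illustrative two-stage, template-augmented inference loop.
# Prompts and model loading are placeholders, not the official ReasonFlux API.
from transformers import pipeline

generate = pipeline("text-generation", model="Gen-Verse/ReasonFlux-V2-32B")  # assumed loadable

problem = "Find all real x with x^2 - 5x + 6 = 0."

# Stage 1 (Template Proposer): propose a suitable high-level thought template.
template = generate(
    f"Propose a high-level solution template for this problem:\n{problem}",
    max_new_tokens=128,
)[0]["generated_text"]

# Stage 2 (Template Reasoner): instantiate the template with detailed reasoning.
solution = generate(
    f"Template:\n{template}\n\nFollow the template to solve step by step:\n{problem}",
    max_new_tokens=512,
)[0]["generated_text"]
print(solution)
```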
ava-mitch/test_girl02
ava-mitch
2025-05-26T03:28:28Z
0
0
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-26T03:28:21Z
--- tags: - flux - text-to-image - lora - diffusers - fal base_model: black-forest-labs/FLUX.1-dev instance_prompt: girl license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # test_girl02 <Gallery /> ## Model description ## Trigger words You should use `girl` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/ava-mitch/test_girl02/tree/main) them in the Files & versions tab. ## Training at fal.ai Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
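A minimal diffusers sketch for trying the LoRA is below; it assumes access to FLUX.1-dev, a CUDA GPU with enough memory, and illustrative generation settings:

```python
# Sketch: apply this LoRA on top of FLUX.1-dev with diffusers.
# Hardware and generation settings here are illustrative assumptions.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("ava-mitch/test_girl02")

# `girl` is the trigger word documented above.
image = pipe(
    "a portrait photo of girl", num_inference_steps=28, guidance_scale=3.5
).images[0]
image.save("girl.png")
```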
RayneAmes/justinbieber_v2
RayneAmes
2025-05-26T03:26:12Z
0
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-02-23T05:25:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jaisalmer-viral-video-Link/Jaisalmer-viral-video-Download-Link-Trending-2025-Free-Android
Jaisalmer-viral-video-Link
2025-05-26T03:23:29Z
0
0
adapter-transformers
[ "adapter-transformers", "legal", "aa", "dataset:nvidia/OpenMathReasoning", "base_model:nari-labs/Dia-1.6B", "base_model:adapter:nari-labs/Dia-1.6B", "license:apache-2.0", "region:us" ]
null
2025-05-26T03:08:41Z
--- license: apache-2.0 datasets: - nvidia/OpenMathReasoning language: - aa base_model: - nari-labs/Dia-1.6B library_name: adapter-transformers tags: - legal --- <a rel="nofollow" href="https://t.me/Indiasexygirl2025">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️​</a> In the age of social media, it only takes one moment for a video to go viral—and this time, that moment happened in the golden city of Jaisalmer, Rajasthan. The Jaisalmer viral video has captured widespread attention across India and beyond, sparking discussions, debates, and curiosity across platforms like Twitter, Instagram, and Telegram. Now, you can download the Jaisalmer viral video and see for yourself what the buzz is all about.
NTSG/gemma-3
NTSG
2025-05-26T03:22:03Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-26T03:21:42Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** NTSG - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This Gemma 3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
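Since the card above stops at the Unsloth badge, an illustrative inference sketch follows. It is an editor-added assumption, not part of the original card: it presumes the repo holds merged Gemma 3 weights that a recent transformers release (>= 4.50, which added Gemma 3 support) can load through the standard text-generation pipeline, and the prompt shown is only a placeholder.

```python
# Hedged usage sketch for NTSG/gemma-3 (see assumptions above).
from transformers import pipeline

# device_map="auto" places the model on a GPU when one is available.
pipe = pipeline("text-generation", model="NTSG/gemma-3", device_map="auto")

messages = [
    {"role": "user", "content": "Explain what QLoRA fine-tuning does in two sentences."}
]
result = pipe(messages, max_new_tokens=128)

# For chat-style input, generated_text holds the whole conversation;
# the final turn is the model's reply.
print(result[0]["generated_text"][-1]["content"])
```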
mradermacher/aaronGPTplus-i1-GGUF
mradermacher
2025-05-26T03:18:15Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:totallynotbrent/aaronGPTplus", "base_model:quantized:totallynotbrent/aaronGPTplus", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-05-26T02:40:29Z
--- base_model: totallynotbrent/aaronGPTplus language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/totallynotbrent/aaronGPTplus <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/aaronGPTplus-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ1_S.gguf) | i1-IQ1_S | 0.3 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ1_M.gguf) | i1-IQ1_M | 0.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ2_S.gguf) | i1-IQ2_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ2_M.gguf) | i1-IQ2_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.4 | very low quality | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q2_K.gguf) | i1-Q2_K | 0.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ3_S.gguf) | i1-IQ3_S | 0.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ3_M.gguf) | i1-IQ3_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.5 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q4_0.gguf) | i1-Q4_0 | 0.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q4_1.gguf) | i1-Q4_1 | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q6_K.gguf) | i1-Q6_K | 0.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
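The Usage section above defers to external READMEs for GGUF handling; as a concrete illustration, here is a minimal editor-added sketch of running one of the listed quants with llama-cpp-python (`pip install llama-cpp-python huggingface_hub`). The file name matches the i1-Q4_K_M row of the table; swap it for whichever quant you download.

```python
# Download one quant from the repo and run a short completion with the llama.cpp bindings.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/aaronGPTplus-i1-GGUF",
    filename="aaronGPTplus.i1-Q4_K_M.gguf",  # "fast, recommended" per the table above
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```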
DZABALA/roberta-base-bne-platzi-project-nlp-con-transformers
DZABALA
2025-05-26T03:17:17Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:PlanTL-GOB-ES/roberta-base-bne", "base_model:finetune:PlanTL-GOB-ES/roberta-base-bne", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-26T01:37:27Z
--- library_name: transformers license: apache-2.0 base_model: PlanTL-GOB-ES/roberta-base-bne tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-base-bne-platzi-project-nlp-con-transformers results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-platzi-project-nlp-con-transformers This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.4551 - Accuracy: 0.865 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3514 | 1.0 | 438 | 0.4019 | 0.854 | | 0.2282 | 2.0 | 876 | 0.4551 | 0.865 | ### Framework versions - Transformers 4.52.3 - Pytorch 2.6.0+cpu - Datasets 3.6.0 - Tokenizers 0.21.1
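The card lists hyperparameters but no usage snippet, so a minimal editor-added inference sketch follows. It assumes the fine-tuned checkpoint is published under this card's repo id; since the training data is unspecified, the label names are undocumented and the raw pipeline output is printed as-is.

```python
# Inference sketch for the fine-tuned Spanish RoBERTa classifier (assumptions above).
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="DZABALA/roberta-base-bne-platzi-project-nlp-con-transformers",
)

# Spanish input, since the base model (roberta-base-bne) is Spanish.
print(clf("El producto llegó a tiempo y funciona perfectamente."))
# Example output shape: [{'label': 'LABEL_1', 'score': 0.98}] -- label semantics undocumented.
```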
RayneAmes/marill_v1
RayneAmes
2025-05-26T03:10:26Z
0
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-02-25T22:27:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
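The card above is an empty template, but the repo's tags (parler_tts, text2text-generation) suggest a Parler-TTS checkpoint. The sketch below is an editor-added guess on that assumption, following the standard Parler-TTS interface (`pip install parler-tts soundfile`); the description and prompt strings are placeholders, and the repo may not actually load this way.

```python
# Hypothetical Parler-TTS usage for RayneAmes/marill_v1 (see assumptions above).
import torch
import soundfile as sf
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer

device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("RayneAmes/marill_v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("RayneAmes/marill_v1")

description = "A clear, neutral voice speaking at a moderate pace."  # placeholder
prompt = "Hello, this is a quick synthesis test."                    # placeholder

# Parler-TTS conditions on a voice description (input_ids) plus the text to speak.
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
sf.write("marill_v1_sample.wav", generation.cpu().numpy().squeeze(), model.config.sampling_rate)
```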