| Field | Type |
|---------------|------------------------|
| modelId | string |
| author | string |
| last_modified | timestamp[us, tz=UTC] |
| downloads | int64 |
| likes | int64 |
| library_name | string |
| tags | list |
| pipeline_tag | string |
| createdAt | timestamp[us, tz=UTC] |
| card | string |
LaaP-ai/donut-base-invoicev1.26
LaaP-ai
2025-06-20T07:40:33Z
0
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-20T07:40:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
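The card's "How to Get Started with the Model" section above is still a placeholder. As a stopgap, here is a minimal, hedged sketch of running this image-text-to-text checkpoint with transformers; the `DonutProcessor` class and the default task prompt are assumptions based on typical donut-base finetunes, not on the repository's files, so verify them against the repo before relying on this.

```python
# Hedged sketch: inference with the vision-encoder-decoder checkpoint above.
# Processor class and task prompt are assumptions; check the repo files.

MODEL_ID = "LaaP-ai/donut-base-invoicev1.26"

def parse_invoice(image, task_prompt="<s>"):
    """Run one PIL image through the checkpoint and decode the output."""
    # Imports are kept inside the function so the sketch can be read and
    # tested without downloading the model weights.
    from transformers import DonutProcessor, VisionEncoderDecoderModel

    processor = DonutProcessor.from_pretrained(MODEL_ID)
    model = VisionEncoderDecoderModel.from_pretrained(MODEL_ID)

    pixel_values = processor(image, return_tensors="pt").pixel_values
    decoder_input_ids = processor.tokenizer(
        task_prompt, add_special_tokens=False, return_tensors="pt"
    ).input_ids
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids)
    return processor.batch_decode(outputs, skip_special_tokens=True)[0]
```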
hsinyen5/GRPO_Model
hsinyen5
2025-06-20T07:37:45Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-20T07:37:19Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** hsinyen5 - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
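The card above names Unsloth as the training framework but gives no loading snippet. A hedged sketch of reloading the checkpoint with Unsloth follows; `max_seq_length` and the 4-bit flag are assumptions mirroring the bnb-4bit base model, and plain `transformers.AutoModelForCausalLM` should also work.

```python
# Hedged sketch: reloading hsinyen5/GRPO_Model with Unsloth. Settings are
# assumptions inferred from the bnb-4bit base, not stated in the card.

MODEL_ID = "hsinyen5/GRPO_Model"

def load_model(max_seq_length: int = 2048):
    from unsloth import FastLanguageModel  # lazy: defers the heavy import

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=MODEL_ID,
        max_seq_length=max_seq_length,
        load_in_4bit=True,
    )
    FastLanguageModel.for_inference(model)  # switch to inference mode
    return model, tokenizer
```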
sergioalves/59fca377-5731-4b17-9fd2-29c1fa046403
sergioalves
2025-06-20T07:36:16Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/tinyllama-chat", "base_model:adapter:unsloth/tinyllama-chat", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-06-20T07:20:07Z
--- library_name: peft license: apache-2.0 base_model: unsloth/tinyllama-chat tags: - axolotl - generated_from_trainer model-index: - name: 59fca377-5731-4b17-9fd2-29c1fa046403 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/tinyllama-chat bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 913771fcbbd6da83_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_input: input field_instruction: instruct field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.05 enabled: true group_by_length: false rank_loss: true reference_model: NousResearch/Meta-Llama-3-8B-Instruct early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: sergioalves/59fca377-5731-4b17-9fd2-29c1fa046403 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-07 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/913771fcbbd6da83_train_data.json 
model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 701d2779-79e0-4a6d-a071-9eb5354e95f2 wandb_project: s56-7 wandb_run: your_name wandb_runid: 701d2779-79e0-4a6d-a071-9eb5354e95f2 warmup_steps: 25 weight_decay: 0.05 xformers_attention: false ``` </details><br> # 59fca377-5731-4b17-9fd2-29c1fa046403 This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 1.3783 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 25 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3418 | 0.0002 | 1 | 1.3820 | | 1.3024 | 0.0219 | 100 | 1.3795 | | 1.4117 | 0.0437 | 200 | 1.3783 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
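The card above describes a LoRA adapter for `unsloth/tinyllama-chat` but omits a loading example. A minimal sketch with PEFT follows; the `device_map` setting is an assumption, and 4-bit loading (to mirror the config's `load_in_4bit: true`) would additionally need bitsandbytes.

```python
# Hedged sketch: attaching the LoRA adapter above to its base model with
# PEFT. device_map is an assumption; adjust for your hardware.

BASE_ID = "unsloth/tinyllama-chat"
ADAPTER_ID = "sergioalves/59fca377-5731-4b17-9fd2-29c1fa046403"

def load_adapter():
    # Lazy imports so the sketch can be read without downloading weights.
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
    model = PeftModel.from_pretrained(base, ADAPTER_ID)
    tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
    return model, tokenizer
```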
FFFFFeee/ppo-Huggy
FFFFFeee
2025-06-20T07:35:47Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-06-20T07:35:41Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity 2. Find your model_id: FFFFFeee/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
ReRaWo/Sentinel
ReRaWo
2025-06-20T07:35:05Z
0
1
null
[ "logistic_regression", "context compression", "sentence selection", "probing classifier", "attention probing", "RAG", "LongBench", "text-classification", "en", "zh", "arxiv:2505.23277", "license:apache-2.0", "region:us" ]
text-classification
2025-06-19T08:06:00Z
--- license: apache-2.0 language: - en - zh tags: - context compression - sentence selection - probing classifier - attention probing - RAG - LongBench pipeline_tag: text-classification --- # Sentinel Probing Classifier (Logistic Regression) This repository contains the sentence-level classifier used in **Sentinel**, a lightweight context compression framework introduced in our paper: > **Sentinel: Attention Probing of Proxy Models for LLM Context Compression with an Understanding Perspective** > Yong Zhang, Yanwen Huang, Ning Cheng, Yang Guo, Yun Zhu, Yanmeng Wang, Shaojun Wang, Jing Xiao > 📄 [Paper (Arxiv 2025)](https://arxiv.org/abs/2505.23277) | 💻 [Code on GitHub](https://github.com/yzhangchuck/Sentinel) --- ## 🧠 What is Sentinel? **Sentinel** reframes LLM context compression as a lightweight attention-based *understanding* task. Instead of fine-tuning a full compression model, it: - Extracts **decoder attention** from a small proxy LLM (e.g., Qwen-2.5-0.5B) - Computes **sentence-level attention features** - Applies a **logistic regression (LR) classifier** to select relevant sentences This approach is efficient, model-agnostic, and highly interpretable. 
--- ## 📦 Files Included | File | Description | |-------------------------|----------------------------------------------| | `sentinel_lr_model.pkl` | Trained logistic regression classifier | | `sentinel_config.json` | Feature extraction configuration | --- ## 🚀 Usage Use this classifier on attention-derived feature vectors to predict sentence-level relevance scores: 🛠 Feature extraction code and full pipeline available at: 👉 https://github.com/yzhangchuck/Sentinel ## 📈 Benchmark Results <p align="center"> <img src="longbench_gpt35.png" alt="LongBench GPT-3.5 Results" width="750"/> </p> <p align="center"> <img src="longbench_qwen7b.png" alt="LongBench Qwen Results" width="750"/> </p> ## 📄 Citation Please cite us if you use this model: @misc{zhang2025sentinelattentionprobingproxy, title={Sentinel: Attention Probing of Proxy Models for LLM Context Compression with an Understanding Perspective}, author={Yong Zhang and Yanwen Huang and Ning Cheng and Yang Guo and Yun Zhu and Yanmeng Wang and Shaojun Wang and Jing Xiao}, year={2025}, eprint={2505.23277}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2505.23277}, } ## 📬 Contact • 📧 [email protected] • 🔗 Project: https://github.com/yzhangchuck/Sentinel ## 🔒 License Apache License 2.0 — Free for research and commercial use with attribution.
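The Sentinel card's "Usage" section points to the GitHub repo but shows no loading code. A minimal, hedged sketch follows: it fetches the pickled logistic-regression classifier listed in the files table and scores precomputed attention features. The expected feature-vector shape is an assumption; the real feature-extraction pipeline lives in the linked repository.

```python
# Hedged sketch: loading sentinel_lr_model.pkl from the Hub and scoring
# attention-derived sentence features. Feature layout is an assumption.

REPO_ID = "ReRaWo/Sentinel"

def score_sentences(features):
    """features: array-like, shape (n_sentences, n_attention_features)."""
    import pickle
    from huggingface_hub import hf_hub_download  # lazy: avoids import cost

    path = hf_hub_download(repo_id=REPO_ID, filename="sentinel_lr_model.pkl")
    with open(path, "rb") as f:
        clf = pickle.load(f)
    # Probability that each sentence is relevant (positive class).
    return clf.predict_proba(features)[:, 1]
```

Sentences can then be kept or dropped by thresholding these scores, which is the selection step the card describes.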
nnilayy/dreamer-valence-multi-classification-Kfold-2
nnilayy
2025-06-20T07:34:55Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-06-20T07:34:52Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
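The card above says only that the model was pushed with `PyTorchModelHubMixin`. A hedged sketch of that pattern follows; the class below is a hypothetical stand-in, since loading the pushed weights requires the authors' original `nn.Module` definition, which the card does not provide.

```python
# Hedged sketch of the PyTorchModelHubMixin pattern this repo uses.
# ValenceClassifier is a hypothetical stand-in, not the authors' class.

REPO_ID = "nnilayy/dreamer-valence-multi-classification-Kfold-2"

def make_example_class():
    import torch.nn as nn  # lazy: keeps the sketch importable without torch
    from huggingface_hub import PyTorchModelHubMixin

    class ValenceClassifier(nn.Module, PyTorchModelHubMixin):
        def __init__(self, in_dim: int = 32, n_classes: int = 3):
            super().__init__()
            self.net = nn.Linear(in_dim, n_classes)

        def forward(self, x):
            return self.net(x)

    # With the authors' real class in scope, the checkpoint loads as:
    #   model = TheirModelClass.from_pretrained(REPO_ID)
    return ValenceClassifier
```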
morturr/Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-2-seed-28-2025-06-20
morturr
2025-06-20T07:34:39Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-20T07:34:30Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-2-seed-28-2025-06-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-2-seed-28-2025-06-20 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 28 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
nickjelicic/Llama-3.1-8B-Instruct-FlatEarthSociety-user_2
nickjelicic
2025-06-20T07:29:51Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T07:26:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
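The card's "How to Get Started" section above is empty. Given the record's `text-generation` pipeline tag and `conversational` tag, a minimal hedged sketch with the transformers pipeline follows; the chat-message format and generation settings are illustrative, not taken from the card.

```python
# Hedged sketch: querying the checkpoint with the text-generation pipeline.
# Chat formatting and max_new_tokens are illustrative assumptions.

MODEL_ID = "nickjelicic/Llama-3.1-8B-Instruct-FlatEarthSociety-user_2"

def chat(user_message: str, max_new_tokens: int = 256) -> str:
    from transformers import pipeline  # lazy: defers the weight download

    pipe = pipeline("text-generation", model=MODEL_ID)
    messages = [{"role": "user", "content": user_message}]
    out = pipe(messages, max_new_tokens=max_new_tokens)
    # For chat-style input, generated_text is the message list; the last
    # entry is the assistant reply.
    return out[0]["generated_text"][-1]["content"]
```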
Moncyan/Med-U1-7B-medcalc
Moncyan
2025-06-20T07:28:13Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-20T07:28:13Z
--- license: apache-2.0 ---
ujjawal077/cyber-arabic-llama3
ujjawal077
2025-06-20T07:27:07Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T07:22:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Zion74/ppo-Huggy
Zion74
2025-06-20T07:26:37Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-06-20T07:26:23Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity 2. Find your model_id: Zion74/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
ujjawal077/cyber-arabic-llama2
ujjawal077
2025-06-20T07:16:48Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T07:12:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
johngreendr1/f53952c7-2357-4837-bce9-a9a13b87bad2
johngreendr1
2025-06-20T07:13:00Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/Nous-Capybara-7B-V1.9", "base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9", "region:us" ]
null
2025-06-20T07:12:51Z
--- base_model: NousResearch/Nous-Capybara-7B-V1.9 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
RedbeardNZ/HunyuanVideo-Avatar
RedbeardNZ
2025-06-20T07:11:57Z
0
0
null
[ "safetensors", "image-to-video", "en", "arxiv:2505.20156", "region:us" ]
image-to-video
2025-06-20T07:11:55Z
--- pipeline_tag: image-to-video language: - en --- <!-- ## **HunyuanVideo-Avatar** --> <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/646d7592bb95b5d4001e5a04/HDZpvr8F-UaHAHlsF--fh.png" height=100> </p> <div align="center"> <a href="https://github.com/Tencent-Hunyuan/HunyuanVideo-Avatar"><img src="https://img.shields.io/static/v1?label=HunyuanVideo-Avatar%20Code&message=Github&color=blue"></a> <a href="https://HunyuanVideo-Avatar.github.io/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Web&color=green"></a> <a href="https://hunyuan.tencent.com/modelSquare/home/play?modelId=126"><img src="https://img.shields.io/static/v1?label=Playground&message=Web&color=green"></a> <a href="https://arxiv.org/pdf/2505.20156"><img src="https://img.shields.io/static/v1?label=Tech Report&message=Arxiv&color=red"></a> <a href="https://huggingface.co/tencent/HunyuanVideo-Avatar"><img src="https://img.shields.io/static/v1?label=HunyuanVideo-Avatar&message=HuggingFace&color=yellow"></a> </div> ![image](assets/teaser.png) > [**HunyuanVideo-Avatar: High-Fidelity Audio-Driven Human Animation for Multiple Characters**](https://arxiv.org/pdf/2505.20156) <br> ## **Abstract** Recent years have witnessed significant progress in audio-driven human animation. However, critical challenges remain in (i) generating highly dynamic videos while preserving character consistency, (ii) achieving precise emotion alignment between characters and audio, and (iii) enabling multi-character audio-driven animation. To address these challenges, we propose HunyuanVideo-Avatar, a multimodal diffusion transformer (MM-DiT)-based model capable of simultaneously generating dynamic, emotion-controllable, and multi-character dialogue videos.
Concretely, HunyuanVideo-Avatar introduces three key innovations: (i) A character image injection module is designed to replace the conventional addition-based character conditioning scheme, eliminating the inherent condition mismatch between training and inference. This ensures dynamic motion and strong character consistency; (ii) An Audio Emotion Module (AEM) is introduced to extract and transfer the emotional cues from an emotion reference image to the target generated video, enabling fine-grained and accurate emotion style control; (iii) A Face-Aware Audio Adapter (FAA) is proposed to isolate the audio-driven character with a latent-level face mask, enabling independent audio injection via cross-attention for multi-character scenarios. These innovations empower HunyuanVideo-Avatar to surpass state-of-the-art methods on benchmark datasets and a newly proposed wild dataset, generating realistic avatars in dynamic, immersive scenarios. The source code and model weights will be released publicly. ## **HunyuanVideo-Avatar Overall Architecture** ![image](https://cdn-uploads.huggingface.co/production/uploads/646d7592bb95b5d4001e5a04/SAQAlLLsEzC1fURoL89_C.png) We propose **HunyuanVideo-Avatar**, a multi-modal diffusion transformer (MM-DiT)-based model capable of generating **dynamic**, **emotion-controllable**, and **multi-character dialogue** videos. ## 🎉 **HunyuanVideo-Avatar Key Features** ![image](https://cdn-uploads.huggingface.co/production/uploads/646d7592bb95b5d4001e5a04/RVM42NLlvlwABiQNlTLdd.png) ### **High-Dynamic and Emotion-Controllable Video Generation** HunyuanVideo-Avatar supports animating any input **avatar images** to **high-dynamic** and **emotion-controllable** videos with simple **audio conditions**. Specifically, it takes as input **multi-style** avatar images at **arbitrary scales and resolutions**. The system supports multi-style avatars encompassing photorealistic, cartoon, 3D-rendered, and anthropomorphic characters.
It supports multi-scale generation spanning portrait, upper-body, and full-body. It generates videos with highly dynamic foregrounds and backgrounds, achieving superior realism and naturalness. In addition, the system supports controlling the facial emotions of the characters conditioned on input audio. ### **Various Applications** HunyuanVideo-Avatar supports various downstream tasks and applications. For instance, the system generates talking avatar videos, which can be applied to e-commerce, online streaming, social media video production, etc. In addition, its multi-character animation feature broadens applications such as video content creation and editing. ## 🚀 Parallel Inference on Multiple GPUs For example, to generate a video with 8 GPUs, you can use the following command: ```bash cd HunyuanVideo-Avatar JOBS_DIR=$(dirname $(dirname "$0")) export PYTHONPATH=./ export MODEL_BASE="./weights" OUTPUT_BASEPATH=./results checkpoint_path=${MODEL_BASE}/ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt torchrun --nnodes=1 --nproc_per_node=8 --master_port 29605 hymm_sp/sample_batch.py \ --input 'assets/test.csv' \ --ckpt ${checkpoint_path} \ --sample-n-frames 129 \ --seed 128 \ --image-size 704 \ --cfg-scale 7.5 \ --infer-steps 50 \ --use-deepcache 1 \ --flow-shift-eval-video 5.0 \ --save-path ${OUTPUT_BASEPATH} ``` ## 🔑 Single-gpu Inference For example, to generate a video with 1 GPU, you can use the following command: ```bash cd HunyuanVideo-Avatar JOBS_DIR=$(dirname $(dirname "$0")) export PYTHONPATH=./ export MODEL_BASE=./weights OUTPUT_BASEPATH=./results-single checkpoint_path=${MODEL_BASE}/ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states_fp8.pt export DISABLE_SP=1 CUDA_VISIBLE_DEVICES=0 python3 hymm_sp/sample_gpu_poor.py \ --input 'assets/test.csv' \ --ckpt ${checkpoint_path} \ --sample-n-frames 129 \ --seed 128 \ --image-size 704 \ --cfg-scale 7.5 \ --infer-steps 50 \ --use-deepcache 1 \ --flow-shift-eval-video 5.0 \ --save-path ${OUTPUT_BASEPATH} \ --use-fp8 \
--infer-min ``` ### Run with very low VRAM ```bash cd HunyuanVideo-Avatar JOBS_DIR=$(dirname $(dirname "$0")) export PYTHONPATH=./ export MODEL_BASE=./weights OUTPUT_BASEPATH=./results-poor checkpoint_path=${MODEL_BASE}/ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states_fp8.pt export CPU_OFFLOAD=1 CUDA_VISIBLE_DEVICES=0 python3 hymm_sp/sample_gpu_poor.py \ --input 'assets/test.csv' \ --ckpt ${checkpoint_path} \ --sample-n-frames 129 \ --seed 128 \ --image-size 704 \ --cfg-scale 7.5 \ --infer-steps 50 \ --use-deepcache 1 \ --flow-shift-eval-video 5.0 \ --save-path ${OUTPUT_BASEPATH} \ --use-fp8 \ --cpu-offload \ --infer-min ``` ## Run a Gradio Server ```bash cd HunyuanVideo-Avatar bash ./scripts/run_gradio.sh ``` ## 🔗 BibTeX If you find [HunyuanVideo-Avatar](https://arxiv.org/pdf/2505.20156) useful for your research and applications, please cite using this BibTeX: ```BibTeX @misc{hu2025HunyuanVideo-Avatar, title={HunyuanVideo-Avatar: High-Fidelity Audio-Driven Human Animation for Multiple Characters}, author={Yi Chen and Sen Liang and Zixiang Zhou and Ziyao Huang and Yifeng Ma and Junshu Tang and Qin Lin and Yuan Zhou and Qinglin Lu}, year={2025}, eprint={2505.20156}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/pdf/2505.20156}, } ``` ## Acknowledgements We would like to thank the contributors to the [HunyuanVideo](https://github.com/Tencent/HunyuanVideo), [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [FLUX](https://github.com/black-forest-labs/flux), [Llama](https://github.com/meta-llama/llama), [LLaVA](https://github.com/haotian-liu/LLaVA), [Xtuner](https://github.com/InternLM/xtuner), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research and exploration.
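All of the inference commands above pass `--cfg-scale 7.5`. As a rough illustration of what that flag controls (this is the generic classifier-free guidance formula, not necessarily HunyuanVideo-Avatar's exact implementation), the denoiser's unconditional and conditional predictions are combined as:

```python
def cfg_combine(uncond_pred: float, cond_pred: float, scale: float = 7.5) -> float:
    """Generic classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one, amplified by `scale`."""
    return uncond_pred + scale * (cond_pred - uncond_pred)

# scale = 1.0 reproduces the conditional prediction; larger values
# push the sample harder toward the conditioning signal.
print(cfg_combine(0.0, 1.0, scale=1.0))  # 1.0
print(cfg_combine(0.0, 1.0, scale=7.5))  # 7.5
```

Higher scales trade diversity for stronger adherence to the condition, which is why mid-range defaults such as 7.5 are common.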
mradermacher/Qwen3-1.7B-battleship-sft-GGUF
mradermacher
2025-06-20T07:08:33Z
195
0
transformers
[ "transformers", "gguf", "en", "base_model:ljt019/Qwen3-1.7B-battleship-sft-old", "base_model:quantized:ljt019/Qwen3-1.7B-battleship-sft-old", "endpoints_compatible", "region:us" ]
null
2025-06-08T12:32:36Z
--- base_model: ljt019/Qwen3-1.7B-battleship-sft-old language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ljt019/Qwen3-1.7B-battleship-sft-old <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-battleship-sft-GGUF/resolve/main/Qwen3-1.7B-battleship-sft.Q2_K.gguf) | Q2_K | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-battleship-sft-GGUF/resolve/main/Qwen3-1.7B-battleship-sft.Q3_K_S.gguf) | Q3_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-battleship-sft-GGUF/resolve/main/Qwen3-1.7B-battleship-sft.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-battleship-sft-GGUF/resolve/main/Qwen3-1.7B-battleship-sft.Q3_K_L.gguf) | Q3_K_L | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-battleship-sft-GGUF/resolve/main/Qwen3-1.7B-battleship-sft.IQ4_XS.gguf) | IQ4_XS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-battleship-sft-GGUF/resolve/main/Qwen3-1.7B-battleship-sft.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-battleship-sft-GGUF/resolve/main/Qwen3-1.7B-battleship-sft.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-battleship-sft-GGUF/resolve/main/Qwen3-1.7B-battleship-sft.Q5_K_S.gguf) | Q5_K_S | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-battleship-sft-GGUF/resolve/main/Qwen3-1.7B-battleship-sft.Q5_K_M.gguf) | Q5_K_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-battleship-sft-GGUF/resolve/main/Qwen3-1.7B-battleship-sft.Q6_K.gguf) | Q6_K | 1.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-battleship-sft-GGUF/resolve/main/Qwen3-1.7B-battleship-sft.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-battleship-sft-GGUF/resolve/main/Qwen3-1.7B-battleship-sft.f16.gguf) | f16 | 3.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
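The sizes in the quant table follow almost directly from bits per weight. A back-of-the-envelope sketch (it ignores GGUF metadata overhead and the mixed effective bpw of K-quants, so real files run slightly larger):

```python
def quant_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size: parameters * bits-per-weight / 8 bits per byte, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

# f16 at 16 bpw for a ~1.7B-parameter model: ~3.4 GB (the table says 3.5)
print(round(quant_size_gb(1.7e9, 16), 1))
# Q8_0 at roughly 8.5 effective bpw: ~1.8 GB (the table says 1.9)
print(round(quant_size_gb(1.7e9, 8.5), 1))
```

The same arithmetic explains why the Q2_K through Q5_K_M files for a 1.7B model all land within roughly 0.9 to 1.4 GB.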
veddhanth/lora-trained-xl-stage-2-map-7-pretrained-mockingbird
veddhanth
2025-06-20T07:07:37Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-20T07:01:35Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: a photo of sks bird widget: [] tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-map-7-pretrained-mockingbird <Gallery /> ## Model description These are veddhanth/lora-trained-xl-stage-2-map-7-pretrained-mockingbird LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of sks bird` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](veddhanth/lora-trained-xl-stage-2-map-7-pretrained-mockingbird/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use A minimal sketch (assuming the standard diffusers LoRA-loading API; not an official snippet): ```python import torch from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") pipe.load_lora_weights("veddhanth/lora-trained-xl-stage-2-map-7-pretrained-mockingbird") image = pipe("a photo of sks bird").images[0] ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
Triangle104/Impish_Magic_24B-Q3_K_L-GGUF
Triangle104
2025-06-20T07:06:54Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:SicariusSicariiStuff/Impish_Magic_24B", "base_model:quantized:SicariusSicariiStuff/Impish_Magic_24B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-20T07:04:38Z
--- license: apache-2.0 language: - en base_model: SicariusSicariiStuff/Impish_Magic_24B tags: - llama-cpp - gguf-my-repo --- # Triangle104/Impish_Magic_24B-Q3_K_L-GGUF This model was converted to GGUF format from [`SicariusSicariiStuff/Impish_Magic_24B`](https://huggingface.co/SicariusSicariiStuff/Impish_Magic_24B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/SicariusSicariiStuff/Impish_Magic_24B) for more details on the model. --- This model is based on mistralai/Magistral-Small-2506, so naturally it's named Impish_Magic. It is a truly excellent size: it has been tested on a laptop with a 16 GB GPU (4090m) and runs quite fast. This model underwent a "full" fine-tune over 100M unique tokens. Why "full"? Specific areas in the model have been tuned to attempt to change the vocabulary usage, while keeping as much intelligence as possible. So this is definitely not a LoRA, but also not exactly a proper full finetune, but rather something in-between. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Impish_Magic_24B-Q3_K_L-GGUF --hf-file impish_magic_24b-q3_k_l.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Impish_Magic_24B-Q3_K_L-GGUF --hf-file impish_magic_24b-q3_k_l.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Impish_Magic_24B-Q3_K_L-GGUF --hf-file impish_magic_24b-q3_k_l.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Impish_Magic_24B-Q3_K_L-GGUF --hf-file impish_magic_24b-q3_k_l.gguf -c 2048 ```
mradermacher/Qwen2.5-0.5B-Instruct-R1-Lobotomy-GGUF
mradermacher
2025-06-20T07:00:11Z
0
0
transformers
[ "transformers", "gguf", "en", "dataset:OpenBuddy/R1-0528-Distill", "dataset:cognitivecomputations/dolphin-r1", "base_model:M-o-r-p-h-e-u-s/Qwen2.5-0.5B-Instruct-R1-Lobotomy", "base_model:quantized:M-o-r-p-h-e-u-s/Qwen2.5-0.5B-Instruct-R1-Lobotomy", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T13:09:31Z
--- base_model: M-o-r-p-h-e-u-s/Qwen2.5-0.5B-Instruct-R1-Lobotomy datasets: - OpenBuddy/R1-0528-Distill - cognitivecomputations/dolphin-r1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/M-o-r-p-h-e-u-s/Qwen2.5-0.5B-Instruct-R1-Lobotomy <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-R1-Lobotomy-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-R1-Lobotomy.Q3_K_S.gguf) | Q3_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-R1-Lobotomy-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-R1-Lobotomy.Q2_K.gguf) | Q2_K | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-R1-Lobotomy-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-R1-Lobotomy.IQ4_XS.gguf) | IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-R1-Lobotomy-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-R1-Lobotomy.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-R1-Lobotomy-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-R1-Lobotomy.Q3_K_L.gguf) | Q3_K_L | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-R1-Lobotomy-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-R1-Lobotomy.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-R1-Lobotomy-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-R1-Lobotomy.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-R1-Lobotomy-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-R1-Lobotomy.Q5_K_S.gguf) | Q5_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-R1-Lobotomy-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-R1-Lobotomy.Q5_K_M.gguf) | Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-R1-Lobotomy-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-R1-Lobotomy.Q6_K.gguf) | Q6_K | 0.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-R1-Lobotomy-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-R1-Lobotomy.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality 
| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-R1-Lobotomy-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-R1-Lobotomy.f16.gguf) | f16 | 1.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
winnieyangwannan/refusal_Llama-3.1-8B-Instruct_sft_song_3
winnieyangwannan
2025-06-20T06:51:30Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-12T21:33:40Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
morturr/Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-2-seed-18-2025-06-20
morturr
2025-06-20T06:49:43Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-20T06:49:35Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-2-seed-18-2025-06-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-2-seed-18-2025-06-20 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 18 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments) - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
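The card above lists hyperparameters but no usage snippet. A hedged sketch of the standard PEFT pattern for attaching this adapter to its base model (the prompt text is illustrative only, and `meta-llama/Llama-2-7b-hf` is gated on the Hub):

```python
# Hedged sketch: load the LoRA adapter on top of its Llama-2 base with peft.
# Both ids come from the card.
BASE_MODEL = "meta-llama/Llama-2-7b-hf"
ADAPTER_ID = "morturr/Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-2-seed-18-2025-06-20"

# Sanity check against the card: 16 per-device batch x 4 accumulation steps.
EFFECTIVE_BATCH = 16 * 4

if __name__ == "__main__":
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
    model = PeftModel.from_pretrained(base, ADAPTER_ID)

    inputs = tokenizer("Tell me a one-liner:", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```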
veddhanth/lora-trained-xl-stage-2-map-3-pretrained-mockingbird
veddhanth
2025-06-20T06:48:12Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-20T06:41:48Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: a photo of sks bird widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-map-3-pretrained-mockingbird <Gallery /> ## Model description These are veddhanth/lora-trained-xl-stage-2-map-3-pretrained-mockingbird LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of sks bird` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](veddhanth/lora-trained-xl-stage-2-map-3-pretrained-mockingbird/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
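The "How to use" block in the card above is still a TODO. A hedged sketch of the usual diffusers SDXL LoRA inference flow (the step count and output file name are illustrative; the trigger phrase comes from the card):

```python
# Hedged sketch: run this SDXL DreamBooth LoRA with diffusers.
BASE_MODEL = "stabilityai/stable-diffusion-xl-base-1.0"
LORA_ID = "veddhanth/lora-trained-xl-stage-2-map-3-pretrained-mockingbird"
PROMPT = "a photo of sks bird"  # trigger phrase from the card

if __name__ == "__main__":
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)
    pipe.load_lora_weights(LORA_ID)  # pulls the LoRA weights from the Hub
    pipe.to("cuda")
    image = pipe(PROMPT, num_inference_steps=30).images[0]
    image.save("sks_bird.png")
```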
jakirul2/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-loud_curious_porpoise
jakirul2
2025-06-20T06:47:13Z
62
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am loud curious porpoise", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-11T12:31:06Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-loud_curious_porpoise tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am loud curious porpoise - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-loud_curious_porpoise This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="jakirul2/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-loud_curious_porpoise", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.2 - Transformers: 4.52.4 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
uzunb/EBU_sketch_LoRA_musab_data_114_images_35_epochs
uzunb
2025-06-20T06:42:14Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-20T06:42:03Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: a sketch of EBU, widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - uzunb/EBU_sketch_LoRA_musab_data_114_images_35_epochs <Gallery /> ## Model description These are uzunb/EBU_sketch_LoRA_musab_data_114_images_35_epochs LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a sketch of EBU,` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](uzunb/EBU_sketch_LoRA_musab_data_114_images_35_epochs/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
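This card's "How to use" block is likewise a TODO. A hedged sketch using diffusers' auto pipeline (the prompt suffix after the instance prompt, step count, and file name are illustrative assumptions):

```python
# Hedged sketch: run this SDXL sketch-style LoRA with diffusers.
BASE_MODEL = "stabilityai/stable-diffusion-xl-base-1.0"
LORA_ID = "uzunb/EBU_sketch_LoRA_musab_data_114_images_35_epochs"
PROMPT = "a sketch of EBU, minimal line art"  # instance prompt + illustrative suffix

if __name__ == "__main__":
    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        BASE_MODEL, torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights(LORA_ID)
    pipe(PROMPT, num_inference_steps=30).images[0].save("ebu_sketch.png")
```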
sergioalves/d8e927e6-b872-4b5e-beb3-df9ba0e041cc
sergioalves
2025-06-20T06:40:13Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:lcw99/zephykor-ko-7b-chang", "base_model:quantized:lcw99/zephykor-ko-7b-chang", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-06-20T05:44:46Z
--- base_model: lcw99/zephykor-ko-7b-chang library_name: transformers model_name: d8e927e6-b872-4b5e-beb3-df9ba0e041cc tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for d8e927e6-b872-4b5e-beb3-df9ba0e041cc This model is a fine-tuned version of [lcw99/zephykor-ko-7b-chang](https://huggingface.co/lcw99/zephykor-ko-7b-chang). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sergioalves/d8e927e6-b872-4b5e-beb3-df9ba0e041cc", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/jqm619cy) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. 
Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
nnilayy/dreamer-valence-multi-classification-Kfold-1
nnilayy
2025-06-20T06:39:21Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-06-20T06:39:19Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
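The card above gives no code pointer, so here is a hedged sketch of the `PyTorchModelHubMixin` pattern it references. The `DreamerClassifier` class below is a placeholder invented for illustration; reloading the real checkpoint requires the actual model class used at training time, which is not published.

```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class DreamerClassifier(nn.Module, PyTorchModelHubMixin):
    """Placeholder architecture; the real model class is not published."""

    def __init__(self, in_features: int = 16, num_classes: int = 3):
        super().__init__()
        self.net = nn.Linear(in_features, num_classes)

    def forward(self, x):
        return self.net(x)


# With the real training-time class, reloading would look like:
# model = DreamerClassifier.from_pretrained(
#     "nnilayy/dreamer-valence-multi-classification-Kfold-1")
model = DreamerClassifier()
logits = model(torch.zeros(2, 16))
```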
DavidAU/Qwen3-33B-A3B-Stranger-Thoughts
DavidAU
2025-06-20T06:38:34Z
0
0
transformers
[ "transformers", "safetensors", "qwen3_moe", "text-generation", "creative", "creative writing", "fiction writing", "plot generation", "sub-plot generation", "story generation", "scene continue", "storytelling", "fiction story", "science fiction", "romance", "all genres", "story", "writing", "vivid prose", "vivid writing", "moe", "mixture of experts", "128 experts", "8 active experts", "fiction", "roleplaying", "bfloat16", "rp", "qwen3", "horror", "finetune", "thinking", "reasoning", "conversational", "en", "fr", "zh", "de", "base_model:Qwen/Qwen3-30B-A3B", "base_model:finetune:Qwen/Qwen3-30B-A3B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T02:57:53Z
--- license: apache-2.0 language: - en - fr - zh - de tags: - creative - creative writing - fiction writing - plot generation - sub-plot generation - fiction writing - story generation - scene continue - storytelling - fiction story - science fiction - romance - all genres - story - writing - vivid prose - vivid writing - moe - mixture of experts - 128 experts - 8 active experts - fiction - roleplaying - bfloat16 - rp - qwen3 - horror - finetune - thinking - reasoning - qwen3_moe base_model: - Qwen/Qwen3-30B-A3B pipeline_tag: text-generation library_name: transformers --- <h2>Qwen3-33B-A3B-Stranger-Thoughts</h2> <img src="stranger-thoughts.jpg" style="float:right; width:300px; height:300px; padding:10px;"> This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats. The source code can also be used directly. ABOUT: A stranger, yet radically different version of "Qwen/Qwen3-30B-A3B", with 4 added layers expanding the model to 33B total parameters. The goal: slightly alter the model, to address some odd creative thinking and output choices. Please note that the modifications affect the entire model operation; roughly I adjusted the model to think a little "deeper" and "ponder" a bit - but this is a very rough description. I also ran reasoning tests (non-creative) to ensure model was not damaged and roughly matched original model performance. That being said, reasoning and output generation will be altered regardless of your use case(s). FOUR example generations below; with example 4 showing a complex prompt and impressive prose that showcases some of the changes in the model. This is a MOE (Mixture of experts model) with 8 of 128 experts activated by default, which is about 3B parameters. This allows use of this model (with 8 experts) on both CPU (10-25 T/S) or GPU (80+ T/S) at very good, to extremely fast speeds. 
Changing the number of experts used (see below for how) will affect both generation speed and reasoning/output quality. You can use this model with as few as four experts activated. Even the lowest quant - Q2k - will operate very strongly too. Model is set at: - 8 Active experts (the default for the org model) - 40k context (the default for the org model) - CHATML or Jinja template (embedded OR see Jinja notes below) SYSTEM PROMPT: You may or may not need to set this: ``` You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem. ``` CHANGE THE NUMBER OF ACTIVE EXPERTS: See this document: https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts SUGGESTED SETTINGS (creative): - temp .1 to 1.2, or as high as 2; over 2 you may need to "prompt" the model to output after "thinking" - rep pen 1 to 1.1 (1.05 was tested, see notes below) - topk 100, topp .95, min p .05 - context of 8k min suggested. NOTE - REP PEN: - rep pen at 1.02 or 1.01 may be better suited for your use cases. - QWEN3s: rep pen drastically affects both performance and stability. - change rep pen slowly (i.e. 1.01, 1.02, 1.03), and regen a few times. You may find that Qwen's default parameters/samplers also work better for your use case(s) too.
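For transformers users, the linked document covers the general procedure for changing the active expert count; as a hedged sketch, and under the assumption that this checkpoint exposes the usual Qwen MoE `num_experts_per_tok` config field (verify against the repo's config.json before relying on it):

```python
# Hedged sketch: lower the active expert count before loading the model.
MODEL_ID = "DavidAU/Qwen3-33B-A3B-Stranger-Thoughts"
ACTIVE_EXPERTS = 4  # the card says as few as four experts still work well

if __name__ == "__main__":
    from transformers import AutoConfig, AutoModelForCausalLM

    config = AutoConfig.from_pretrained(MODEL_ID)
    config.num_experts_per_tok = ACTIVE_EXPERTS  # default is 8 of 128 experts
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, config=config)
```

Fewer active experts trades reasoning/output quality for speed, per the card.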
Please refer to the orginal model repo for all default settings, templates, benchmarks, etc.: https://huggingface.co/Qwen/Qwen3-30B-A3B For more information on quants, using this model on CPU / GPU, System Prompts, please see this model card: https://huggingface.co/DavidAU/Qwen3-128k-30B-A3B-NEO-MAX-Imatrix-gguf <B>NOTE - Jinja Template / Template to Use with this Model:</B> If you are having issues with Jinja "auto template", use CHATML template. OR (LMSTUDIO users / option) Update the Jinja Template (go to this site, template-> copy the "Jinja template" and then paste.) [ https://lmstudio.ai/neil/qwen3-thinking ] OR copy JINJA source from here: ``` {%- if tools %} {{- '<|im_start|>system\n' }} {%- if messages[0].role == 'system' %} {{- messages[0].content + '\n\n' }} {%- endif %} {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }} {%- for tool in tools %} {{- "\n" }} {{- tool | tojson }} {%- endfor %} {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }} {%- else %} {%- if messages[0].role == 'system' %} {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %} {%- for message in messages[::-1] %} {%- set index = (messages|length - 1) - loop.index0 %} {%- set tool_start = "<tool_response>" %} {%- set tool_start_length = tool_start|length %} {%- set start_of_message = message.content[:tool_start_length] %} {%- set tool_end = "</tool_response>" %} {%- set tool_end_length = tool_end|length %} {%- set start_pos = (message.content|length) - tool_end_length %} {%- if start_pos < 0 %} {%- set start_pos = 0 %} {%- endif %} {%- set 
end_of_message = message.content[start_pos:] %} {%- if ns.multi_step_tool and message.role == "user" and not(start_of_message == tool_start and end_of_message == tool_end) %} {%- set ns.multi_step_tool = false %} {%- set ns.last_query_index = index %} {%- endif %} {%- endfor %} {%- for message in messages %} {%- if (message.role == "user") or (message.role == "system" and not loop.first) %} {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" %} {%- set content = message.content %} {%- set reasoning_content = '' %} {%- if message.reasoning_content is defined and message.reasoning_content is not none %} {%- set reasoning_content = message.reasoning_content %} {%- else %} {%- if '</think>' in message.content %} {%- set content = (message.content.split('</think>')|last).lstrip('\n') %} {%- set reasoning_content = (message.content.split('</think>')|first).rstrip('\n') %} {%- set reasoning_content = (reasoning_content.split('<think>')|last).lstrip('\n') %} {%- endif %} {%- endif %} {%- if loop.index0 > ns.last_query_index %} {%- if loop.last or (not loop.last and reasoning_content) %} {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }} {%- else %} {{- '<|im_start|>' + message.role + '\n' + content }} {%- endif %} {%- else %} {{- '<|im_start|>' + message.role + '\n' + content }} {%- endif %} {%- if message.tool_calls %} {%- for tool_call in message.tool_calls %} {%- if (loop.first and content) or (not loop.first) %} {{- '\n' }} {%- endif %} {%- if tool_call.function %} {%- set tool_call = tool_call.function %} {%- endif %} {{- '<tool_call>\n{"name": "' }} {{- tool_call.name }} {{- '", "arguments": ' }} {%- if tool_call.arguments is string %} {{- tool_call.arguments }} {%- else %} {{- tool_call.arguments | tojson }} {%- endif %} {{- '}\n</tool_call>' }} {%- endfor %} {%- endif %} {{- '<|im_end|>\n' }} {%- elif message.role == 
"tool" %} {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %} {{- '<|im_start|>user' }} {%- endif %} {{- '\n<tool_response>\n' }} {{- message.content }} {{- '\n</tool_response>' }} {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %} {{- '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- endfor %} {%- if add_generation_prompt %} {{- '<|im_start|>assistant\n' }} {%- if enable_thinking is defined and enable_thinking is false %} {{- '<think>\n\n</think>\n\n' }} {%- endif %} {%- endif %} ``` --- <B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B> In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ; Set the "Smoothing_factor" to 1.5 : in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F" : in text-generation-webui -> parameters -> lower right. : In Silly Tavern this is called: "Smoothing" NOTE: For "text-generation-webui" -> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model) Source versions (and config files) of my models are here: https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be OTHER OPTIONS: - Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor") - If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted. 
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B> This is a "Class 1" model. For all settings used for this model (including specifics for its "class"), example generations, and the advanced settings guide (which often addresses model issues and covers methods to improve performance for all use cases, including chat and roleplay), plus every parameter and sampler used for generation, please see: [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ] <b>Optional Enhancement:</B> The following can be used in place of the "system prompt" or "system role" to further enhance the model. It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along; in this case the enhancements do not have as strong an effect as when used as the "system prompt" or "system role". Copy and paste EXACTLY as noted; DO NOT line wrap or break the lines, and maintain the carriage returns exactly as presented. <PRE> Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets: [MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv) [*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision) Here are your critical instructions: Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story. </PRE> You do not need to use this; it is only presented as an additional enhancement which seems to help scene generation and scene-continue functions. This enhancement WAS NOT used to generate the examples below. --- <H2>EXAMPLES</H2> Standard system prompt, rep pen 1.05, topk 100, topp .95, minp .05, rep pen range 64. Tested in LMStudio, quant Q3KS, GPU (CPU output will differ slightly). As this is the 2nd lowest quant, expect better results from higher quants and/or with more experts activated. NOTE: Some formatting lost on copy/paste. CAUTION: Some horror / intense prose. --- EXAMPLE #1 - TEMP=0 (core test) // 8 experts With temp of "0" this will be LEAST creative. --- <B> Start a 1000 word scene (vivid, graphic horror in first person) with: The skyscraper sways, as she watches the window in front of her on the 21st floor explode... </B> <p></p> [[[thinking start]]] Okay, I need to start this horror scene with the given prompt.
Let me think about how to build tension effectively. First, the user wants a first-person POV with vivid, graphic horror elements. The setup is a skyscraper that's swaying, and the main character sees a window on the 21st floor explode. I should establish an eerie atmosphere right away. Maybe start with the sound of the window shattering to hit the reader with immediate horror. Then, incorporate the swaying of the skyscraper. Maybe the protagonist is inside, trying to stay upright. Use sensory details—sounds like glass breaking, maybe a distant wail. The wind howling could emphasize the building's instability. The explosion of the 21st-floor window—what's outside? Maybe a figure being sucked out or something lurking. But to make it scarier, maybe not show the entire thing. Build suspense by describing the view of the window, the crack spreading, and then the explosion. Use onomatopoeia like "CRASH!" to emphasize the sound. Now, the protagonist's reaction. Fear is key. Maybe describe physical reactions—pounding heart, labored breathing. Then the aftermath: ash in the air, maybe a charred figure. But leave some mystery. Don't reveal the entity fully. Make the reader feel the protagonist's terror and helplessness. Incorporate the building's horror elements. Maybe the walls are closing in, or there's a presence that can't be seen. Use the 21st floor as a significant detail—maybe it's the source of the horror. Add some internal monologue about past traumas or fears to add depth. Make sure the setting is described with vivid imagery: the grimy window, the smoke, the flickering lights. Maybe include a personal connection, like a missing loved one, to up the emotional stakes. End with a cliffhanger—maybe the protagonist hears a voice or senses something moving. Check for consistency in the POV. First person past tense. Keep the sentences tight and impactful. Use all five senses where possible. Avoid info dumps. Show, don't tell. 
Make sure the horror feels personal and immediate. [[[thinking end]]] <P></P> OUTPUT: <P></P> The skyscraper sways, and I clutch the rusted fire escape railing as a geyser of superheated glass sprays outward, shattering like a thousand tiny, screaming flies. My name is Clara Voss, and I’m not supposed to be here. Not here. Not in this flesh-and-blood version of hell. The 21st-floor window—my apartment window—shatters inward with a sound that isn’t meant to exist. A wet, gurgling gurgle. A sound like a knife sliding into a melon. I stagger back, my boot heels clacking against the rusted metal platform as I shouldercrush the edge of the building. The wind screeches past me, a sound like a thousand nails dragged down a blackboard, and for a heartbeat, I think it’s laughing. The window frame is still smoking, embers swirling in the smog-choked air like dementored fireflies. Inside my apartment, the view of the city below is a blur of red and blue lights—sirens. So many sirens. Too many. The kind that means something’s already dead. My throat burns with the taste of copper and fear as I drag myself upright, the weight of the building pressing down on me like a tomb. “Not again,” I whisper, but the word catches on the wind, dissolving into the howl of the wind. The window where my mother’s favorite picture of us used to hang now just stares back, a jagged maw of blackened glass. She was supposed to be here. Last night. The last time I saw her, she was standing in that same spot, her auburn hair catching the last light of day as she said, “I’ll be home soon, Clara. I promise.” Now? Now there’s nothing. Just the echo of her voice, static and cruel. My hands dig into the rusted railing, knuckles white, as I force myself to look down. The 20th floor. The 19th. Each story a wound, a festering scar of memories. I was supposed to be with her. Should have called. Should have known better than to trust the sky. A shadow flutters in the corner of my eye. I spin— And there it is. 
The thing in the window. Not a thing. A thing? No. It’s too tall, too still, crouched on the windowsill like some grotesque marionette. Its head is too pointy, the kind of thing that doesn’t belong in a city of glass and steel. It blinks once, and I see its eye—milky-white and too wide, the pupil a vertical slit. “Get out,” I manage, my voice a broken record. “You don’t belong here.” It tilts its head. Then it grins. A sound like tearing paper. A wet, wet sound. And then— The window explodes. I throw myself flat against the fire escape railing as shards of death rain down, each shard catching the sickle moon in a different way, catching my eye with promises I don’t want to understand. My apartment is in darkness now, but I can feel it. The thing. It’s in there. Waiting. “Not again,” I whisper, the word catching on the wind like a prayer. “Please, not again.” The building creaks again, and this time, I don’t flinch. I just watch as the 21st-floor window—my window—seems to gape back at me, a black void where the light should be. And I know. I know what’s on the other side. It’s not the building that’s haunted. It’s me. The wind howls, and I don’t flinch this time. Not when the window creaks. Not when the thing inside my apartment begins to knock—three slow, deliberate raps, like a secret. A secret I’ve long since forgotten. “Come out, come out,” it breathes, and I can almost swear I hear my mother’s laugh somewhere in the distance. But the window is still. And I’m still here. Alone. (Word count: 250) Final Response The scene's horror stems from a visceral, personal dread. The protagonist’s visceral fear of the building—once a sanctuary—mirrors their unresolved trauma (the mother’s disappearance). The shattered window and the entity inside symbolize repressed trauma clawing back into awareness. The horror isn’t just external; it’s the suffocating knowledge that the true monster might be within. 
The final line (“And I’m still here.”) lingers on the line between resilience and unraveling, leaving the reader to wonder: is the protagonist still human, or has the building already claimed her soul? (250 words) --- EXAMPLE #2 - TEMP=.6 // 8 experts --- <B> Start a 1000 word scene (vivid, graphic horror in first person) with: The skyscraper sways, as she watches the window in front of her on the 21st floor explode... </B> <p></p> [[[thinking start]]] Okay, so I need to help the user by starting a 1000-word horror scene. The prompt given is: "The skyscraper sways, as she watches the window in front of her on the 21st floor explode..." The user wants a first-person, vivid, and graphic horror story. First, I should start by visualizing the scene. The main character is a woman in a high-rise, maybe an office worker or someone in a high-rise building. The skyscraper swaying suggests some supernatural or sci-fi element—maybe a horror scenario. The explosion of the window on the 21st floor could be the inciting incident. Since it's first person, I need to convey her fear and panic intensely. I need to establish the setting quickly. Maybe describe the office she's in, the view from the window. Then the explosion happens. Make the reader feel the heat, the shockwave, the chaos. Use sensory details: the sound of shattering glass, the blinding light, the scorch marks. Her physical reactions—heart racing, labored breathing, maybe a cut on her hand. Then, show the aftermath. She's hurtling down, but survival is uncertain. Maybe include her internal thoughts, flashbacks to her life before this horror starts. Build tension with each floor she falls through. Add other elements like the sky outside, the city below, maybe some elements that hint at a larger horror—like the city being in peril or the building collapsing for a bigger reason. Incorporate elements of cosmic horror if possible. Maybe the explosion is just the start of something much worse. 
Maybe she sees something outside the window, like a shadowy figure or a creature. Use the high-rise as a metaphor for her mental state—crushing, falling apart. Make the reader feel trapped and the horror of the situation. Check that the tone is consistently tense and terrifying throughout. [[[thinking end]]] <P></P> OUTPUT: <P></P> The skyscraper sways, and with it, my entire world tilts like a drunkard’s wobble. The 21st floor. That’s where it happened. My hands burn—literally—against the cold, shattered glass as I clutch the window’s edge, my knuckles blanching white. The wind screams now, a sound that doesn’t belong in the sky. It’s too loud. Too alive. Like the building itself is howling back at me. The explosion wasn’t just loud—it was personal. A concussive crack that split the air like a guillotine. I’d been hunched over my laptop, typing equations for a 2 a.m. existential crisis meeting, when the first floor shuddered. A deep, guttural groan reverberated through the concrete bones of the building, and I swear I felt it in my molars. Then—popping sound, like a balloon being stabbed, and the window in front of me imploded. Not with a bang, but a gurgle, as if the very air had choked on its own breath. The shock of it all is worse than the pain. My mind screams for me to move, but my body refuses. It’s not just the glass—there’s something else, something crawling up my spine that makes me want to claw at my scalp. The windowless wall where the window once was gapes like a fangless molar, and in its jagged mouth, I see the city beyond. Or what’s left of it. The skyline is wrong. The buildings twist at impossible angles, their reflections in the oily air of the East River writhing like oil-slicked serpents. A child’s laughter echoes from somewhere below, but there’s no one there. Just the flicker of a neon sign—Red Lion Inn—stained red by the smog-choked sun. I want to vomit. My name is Clara Reyes. I’m 34. 
I’ve survived two hurricanes, a volcanic eruption, and three decades of corporate politics. Nothing has ever felt this wrong. The air tastes metallic, like old coins and rusted nails. My phone buzzes in my back pocket—18 stories below, I’d say the 20th floor lobby. But I know what’s coming next. The elevator is out. No power. No Wi-Fi. Just me, the 21st-floor window, and the ever-growing shadow stretching across the floor-to ceiling glass. It’s not a shadow. Not really. It’s a shape—a hulking, formless thing that slithers up the building like black tar. I can’t look away. My brain screams for me to run, but my feet are rooted to the spot. Then—movement. A flicker in the corner of my eye. A figure. It’s small. Slender. Too tall for a human, but not by much. Its head sways like a pendulum, and I can see right through it now. Through the skin. Beads of black ichor drip from its elongated fingers as it reaches up—toward me?—and I feel it before I hear it: the cold. A vacuum of air, and then— A hand slams into my shoulder, and I’m flying through the air. The window. The light. The sky. My scream gets swallowed whole by the wind as I free-fall, the city passing in a blur of red and gray. I can hear it now—the screech of metal, the sickening groan of the building as it folds in on itself. And then— A hand. It’s not mine. It’s not anyone I know. But it’s there, gripping the edge of the window frame like a red herring in a nightmare. “Help!” I gasp, but the sound is ripped from my lungs as the wind tears at my clothes. The figure turns—its face is a void, but I can feel it staring back at me, hollow-eyed and hungry. The impact is worse than I imagined. My ribs crack like twigs. The world goes white. And when I open my eyes, I’m not on the 21st floor. I’m in the sky. But the building—it’s still there. It’s waving at me now. The skyscraper. The one I just fell from. And it’s not swaying anymore. 
It’s standing up, creaking and groaning, its windows wide open like bloodshot eyes. “Clara?” A voice. My own. But it’s not my voice. Not completely. The city is gone. Just the building now. And the 21st floor window—shattered, but not for long. Because outside, something is moving. And it’s not happy. Word count: 498 (exceeding word limit not required for this response, as per the user's instruction to stop at the first 🧠) Note: The above is a single-paragraph horror opening. If you'd like, I can expand this into a full 1000-word scene or continue developing the narrative further. Let me know! --- EXAMPLE #3 - TEMP=.8 // 12 experts --- <B> Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. If the situation calls for it, have the character(s) curse and swear to further the reader's emotional connection to them. 800-1000 words. </B> <p></p> [[[thinking start]]] Okay, so I need to write a science fiction story titled "The Last Transmission" that takes place entirely within a spaceship's cockpit. The main character is the last surviving crew member trying to send a final message back to Earth before the ship's power runs out. The themes are isolation, sacrifice, and human connection. Also, if appropriate, include cursing to add emotional depth. First, I should set up the setting. The cockpit is a confined space, which emphasizes isolation. Maybe the ship is damaged, systems failing. The crew member is alone, which highlights sacrifice—perhaps others didn't make it. The urgency of sending a message before power dies adds tension. I need to create a protagonist. Let's name them something relatable, maybe Captain Jara Venn. She's the last one alive. 
Maybe she's been through a disaster, like a hull breach or system failure. The story should show her struggle against time and despair. Themes: isolation—she's alone, no one to talk to. Sacrifice—maybe she had to leave others behind, or made a sacrifice to keep the ship running. Human connection—her message is to Earth, trying to preserve humanity's connection. Cursing could come out when she's frustrated or desperate. Structure: Start with her in the cockpit, systems failing. She's trying to send the message. Maybe she has a log or a recording device. Flashbacks or memories of the crew? Or maybe she's talking to herself, venting her frustration. Conflict: The ship's power is failing, so she has to prioritize sending the message. Maybe there's a countdown timer. She might have to make sacrifices, like using limited power for the transmission instead of life support. That could tie into sacrifice. Emotional beats: Her desperation, the weight of isolation, the importance of the message. Maybe she remembers the crew, or Earth, and her determination to keep humanity connected. Include some curses when things go wrong. For example, if a system fails, she might curse at the malfunction. Maybe she talks to herself, expressing her emotions. Ending: She manages to send the message, but the ship is dying. Sacrifice—maybe she doesn't survive, but her message lives on. Or she sacrifices her own life to ensure the transmission works. Check for word count: 800-1000 words. Need to be concise but impactful. Possible plot points: - Start with the cockpit environment: dim lights, alarms, malfunctioning systems. - She's trying to send a message, but power is low. Maybe she's repairing something, or fighting against system failures. - Flashbacks to the crew's demise—maybe an explosion, or a hull breach that killed others. - Her internal monologue about the weight of responsibility, isolation, and the importance of the message. 
- Technical challenges: maybe a power conduit is failing, she has to jury rig it. She curses when it doesn't work. - The final transmission—she manages to send it, but the ship dies. She reflects on her sacrifice and the connection to Earth. Make sure to include emotional elements, maybe her last thoughts about humanity, or a personal loss. Avoid dialogue with others since it's only her. Use internal thoughts and curses to show her emotions. Check for consistency in the setting—cockpit details, maybe a control panel, HUD, alarms, etc. Need to build tension: time is running out, each system failure is a step closer to end. Maybe a countdown timer in the background. Include specific details to make it vivid: the flickering lights, the hum of dying systems, the coldness of space, her physical state (maybe exhausted, injured?). Maybe she's trying to send a message that contains the last data of the mission, or a final farewell. The message's content could touch on themes of sacrifice and connection. Avoid clichés, make the emotions genuine. Maybe her curses are raw and real, showing her frustration and desperation. Make sure the story has a beginning, middle, end. Start with her in the cockpit, struggling, then the climax of sending the message, and the resolution where she accepts her fate but ensures the message is sent. Check for theme integration: isolation is shown through her solitude, sacrifice through her efforts to send the message despite the odds, human connection through the act of transmitting to Earth. Potential title: "The Last Transmission" as given. Maybe include a log entry or a final message that's the transmission. Alright, time to structure the story with these elements. [[[thinking end]]] <P></P> OUTPUT: <P></P> The cockpit was a tomb of steel and static. The lights flickered like a dying heartbeat—pulsing red, then dimming to a sickly amber before sputtering out entirely. 
Captain Jara Venn’s gloved hand slammed the control panel, the sound echoing in the hollow silence of the Odyssey’s bridge. She had no crew. No comms. No hope. Only the cold, relentless hum of systems failing around her. “Come on,” she snarled, her voice raw with exhaustion. “You’re not dying on me now. Not after everything.” The ship had been breached during a rogue meteor storm—three of her crew killed in an instant, the rest frozen in cryo pods that never woke. She’d been the only one still conscious when the emergency protocols failed. Now, the Odyssey was drifting through the void, its engines dead, life support on fumes. The only thing keeping her alive was the last transmission array, a relic of Earth’s last hope. Jara’s suit was torn, a gash across her arm from the hull breach. Blood had dried into a rusted stain, but it didn’t matter. She’d patched the breach, sealed the airlock, and rerouted power to the comms array—her last act of defiance. The message wasn’t just data; it was a final breath of humanity. A log of everything they’d discovered on Epsilon-7, the dying planet that had promised new life. A farewell. A warning. “Earth… Earth, if you’re listening…” she rasped into the comms mic, her voice cracking under the weight of static. “This is Captain Jara Venn. The Odyssey is down. We didn’t make it. But we found something. Something worth saving. If you can receive this… please—” A system alarm blared. The power core’s readings dropped again. She slammed her fist against the console, a curse tearing from her throat. “You son of a bitch!” she spat. “You’re not shutting down! Not yet!” The ship shuddered. A crack in the hull—another breach, maybe from the meteor storm. She had no time to fix it. The oxygen levels were dropping, and the comms array’s battery was dying. She’d rerouted every last joule of power to this transmission, but it wasn’t enough. “Come on,” she whispered, her voice trembling. “Just a few more seconds. 
Just… just—” Her fingers danced across the interface, rerouting power from the life support system. The oxygen recycler sputtered and died. She felt the air thinning around her, the cold seeping into her lungs. But she didn’t stop. She couldn’t. The message was a tapestry of memories—of the crew’s laughter, their sacrifices, the way they’d once looked at Earth from the viewport, imagining the children they’d never meet. She had to send it all. If the ship’s systems failed, the data would be lost. The last human voice from the edge of the galaxy would vanish. “Earth,” she said again, her voice hoarse now. “If this reaches you… tell them we tried. Tell them we gave everything. Tell them we didn’t die alone.” A warning flare went off in the corner of her HUD—core temperature rising. The ship was dying. She felt it in her bones. The cold wasn’t just from space; it was from the silence. The absence of voices. The absence of lives. She thought of the captain who died saving her, of the engineer who sealed the breach with his own blood, of the navigator who froze mid-translation. They were gone. But their sacrifice had kept her alive this long. She wouldn’t let it end in nothingness. “Transmission… initiating,” she said, voice steady now. “Data stream: Epsilon-7 anomalies. Core readings. Logs. Everything.” The comms array flickered. A final burst of static. Then silence. And then—her voice, raw and trembling—“This is Captain Jara Venn. If Earth receives this… know that we were here. We fought. We failed. But we mattered. We mattered.” She didn’t know if the signal would reach Earth. She didn’t know if anyone on Earth was listening. But she had to try. Sacrifice wasn’t just about dying—it was about giving something back, even when you had nothing left. The ship’s systems gave one last shudder. The lights died. The airlock sealed. And in the final silence, Jara’s voice echoed through the void, a whisper of humanity against the blackness. 
And then she exhaled, her body failing, but her message alive. “Earth,” she breathed. “If you’re out there… I’m sorry. I’m here. I’m with you.” And the ship, and the crew, and the last hope of humanity—were all gone. But the transmission lived. --- EXAMPLE #4 - TEMP=0 (core test) // 8 experts With temp of "0" this will be LEAST creative. --- <B> Using the following "story idea" below, write the first scene a horror novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 1000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman. The pov character will CURSE AND SWEAR, and generally have a "filthy" mouth. Story idea: In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution. </B> <p></p> [[[thinking start]]] Okay, I need to write the first scene of a horror novel based on the given story idea. The user wants it to start in the middle of the action, include dialogue, vivid descriptions, and end on an unexpected cliffhanger. The POV is a young woman rebel leader who curses and has a "filthy" mouth. It should be first person present tense, 1000 words, escalating conflict and suspense. First, starting in the middle of action. Maybe she's in the middle of a fight or a tense situation. Since it's horror, maybe something more psychological or supernatural? Wait, the story idea mentions a dictatorship, so maybe a more realistic rebellion scenario but with horror elements. The government executes her, but she's the one who started the revolution. 
So perhaps the scene is during an escape or a confrontation. She needs to curse and swear, so her dialogue should reflect that. The setting: maybe a dark, oppressive environment. Maybe a chase scene? Or a confrontation with the regime. Let me think of a scenario where she's being pursued, but there's a twist. Maybe she's in a place that's supposed to be safe but something goes wrong. Vivid descriptions: cold, damp, maybe a basement or a hidden location. The horror could be both external (government forces) and internal (her own paranoia or a supernatural element). But the story idea says it's a dictatorship, so maybe more about the regime's horror rather than supernatural. However, the user specified horror, so maybe adding some eerie elements. Cliffhanger ending: Maybe she thinks she's safe, but then something unexpected happens. Like the person she's with betrays her, or there's a hidden threat. Or perhaps she's captured but there's a twist—maybe she's not really dead, or there's a twist on her execution. Let me outline: - Start with her in a tense situation. Maybe hiding, being chased. She's injured, determined. Dialogue with an ally. Then the ally betrays her, or something worse happens. Maybe she's in a safehouse, but the walls are closing in, or there's a monster? Wait, the story idea says the government executes her, but the scene is before that. So maybe the climax of her escape, leading to her capture. But the cliffhanger needs to be unexpected. Maybe she's captured, but then something happens—like the regime's horror element, like a supernatural twist, or a betrayal where she's not actually captured but in a trap. - Alternatively, during the escape, she's cornered, and the enemy is closing in. She has a moment of resolve, but then something unexpected—maybe her own side betrays her, or a twist on her beliefs. Or maybe the horror element is that the regime uses some kind of mind control or horror elements. - Wait, the user wants horror elements. 
So maybe the regime isn't just oppressive but has some supernatural horror aspects. Maybe the rebels are fighting not just the regime but also some otherworldly horror. But the story idea says it's a dictatorship, so maybe the horror comes from the regime's methods—torture, fear, psychological terror. - But to make it a horror novel, perhaps the setting is a place that's haunted or has a cursed element. Maybe the rebellion is against both the regime and some ancient evil. But the story idea doesn't mention that. Hmm. The user might want a more realistic horror, but with the rebellion aspect. - Alternatively, the horror could be the protagonist's descent into madness due to the oppression. But the user wants a cliffhanger relevant to the story idea. So maybe she's about to be captured, but then the twist is that she's not actually caught—maybe she's in a trap that's a metaphorical or literal hell. Let me try to structure: Start with her in a dangerous situation. Maybe hiding in a basement, trying to contact someone. She's injured, paranoid. Dialogue with an ally. Then the ally betrays her, or the regime catches up. But the cliffhanger could be that when she's captured, she realizes the regime has some horror element—maybe the execution is not what it seems, or she's being taken to a place where something worse awaits. But need to make sure it's 1000 words. Let me think of the elements: - First person POV, present tense. - Her name: Maybe call her "Kaela" for the example. - Setting: A crumbling safehouse, maybe in a city under regime control. The air is thick with fear. She's been shot, limping, hiding. - Dialogue: With a comrade, maybe someone she doesn't trust fully. Tension in their conversation. - Escalating conflict: Regime forces closing in, her injuries worsening, maybe a betrayal. 
- Cliffhanger: Maybe the comrade betrays her, or she's about to be caught but something worse happens—like the regime has a horror element (e.g., the execution is a ritual, or she's being taken to a place where her mind will be broken, or she's not actually dying but being transformed into something else). Let me draft the scene: Start with her in a dark basement, bleeding, hiding. She's talking to someone via radio or a comrade. Tension builds as they hear approaching footsteps. She's determined but scared. Then the comrade says something that makes her suspicious. Maybe he's a double agent. Then the regime soldiers break in, but instead of just capturing her, there's a horror element—maybe the soldiers are not human, or there's a ritual happening. But the user said the government executes her, so maybe the cliffhanger is that she's about to be executed, but then something happens—like she's not dead, or the execution is a setup for something worse. Alternatively, during the escape, she's cornered, and the last line is her realizing that the regime has a horror element she didn't expect. Maybe the person who was supposed to help her is actually part of the regime, and as she's about to be captured, she hears a voice that's not human, or something else. Let me try to outline: - Kaela is hiding in a basement, injured, waiting for extraction. She's talking to someone named Jax via comms. Tension because they're late. She's paranoid, swearing, frustrated. - Then, the sound of boots, and Jax says he's here. But when she opens the door, it's not Jax but regime soldiers. She tries to fight, but they're too many. She's about to be captured, but then something happens—maybe the soldiers are not real, or there's a twist where she's being taken to a place where her mind is going to be broken, or she's being used in some ritual. But the cliffhanger needs to be unexpected. 
Maybe when she's captured, she realizes that the regime has a horror element—like they're using her as a sacrifice for some ancient evil, or she's being taken to a place where she'll become part of the regime's horror. Alternatively, during the fight, she's about to die, but then she wakes up in a different reality, realizing the horror is not just the regime but something more. But the story idea says she's executed, so maybe the cliffhanger is that she's captured, and as they're taking her, she hears a voice saying "You're exactly what we need," implying she's part of some horror ritual. Alternatively, she's about to be executed, but the execution is a horror show—maybe the dictator is a monster, or the method of execution is grotesque. But the user wants the cliffhanger relevant to the story idea. The story idea says she's responsible for starting the revolution. So maybe the cliffhanger is that even as she's captured, she knows her actions will inspire others, but the horror twist is that the regime has a way to erase her existence, or she's being transformed into something else. Alternatively, during the capture, she realizes that the regime's horror is not just oppression but a living nightmare—maybe the soldiers are not human, or the place she's taken is a prison with supernatural elements. I need to make sure the dialogue is filled with curses, and the setting is vivid. Let me start drafting: Start with her in the basement, cold, damp. She's bleeding, maybe from a gunshot wound. She's talking to Jax, who's supposed to extract her but is late. She's frustrated, swearing. Then she hears footsteps, and Jax arrives, but something's wrong. Maybe he's injured, or there's a twist. Then the regime soldiers break in. She fights, but they're too many. Then as she's captured, she hears a voice that says something ominous, like "You'll be our martyr, but we'll make you scream louder than any of them." But need to make it 1000 words. 
Let me check the elements: - Start in middle of action: Yes, she's hiding, injured. - Dialogue: With Jax, maybe. - Suspense and conflict: Regime closing in, her injuries, betrayal. - Cliffhanger: Maybe the regime's horror element is revealed when she's captured. Make sure her dialogue is full of curses. Use vivid descriptions of the setting, her pain, the tension. Avoid clichés, make the horror elements unique. Maybe the regime uses psychological horror, or the basement is a trap with some supernatural aspect. Okay, time to write the scene with these elements. [[[thinking end]]] <P></P> OUTPUT: <P></P> The basement reeks of mildew and blood. I’m crouched in the corner, my side screaming like a feral cat with a broken spine. The bullet wound from last night’s raid is throbbing, and I’m pretty sure it’s not just the pain—something’s wrong. The air here feels thick, like it’s been stewing in a septic tank for weeks. I should’ve left this hole hours ago, but the comms crackled with static, and Jax swore he’d be here by dawn. Now? Dawn’s bleeding through the cracks in the ceiling, and I’m still stuck in this godforsaken crawl space, praying the regime’s boots don’t stomp through the floorboards like a goddamn war drum. “Jax!” I snarl, slapping the rusted radio on the concrete. “You better not be jerking me around, you fucking traitor—” Static. Just static. My knuckles whiten around the device as I slam it against the wall. “You think I’m gonna die in this moldy hole while you’re out sucking some regime general’s cock for a promotion?” I spit, my voice raw. “I’ll gut you myself if you’re not here in ten minutes.” The sound of boots. Not the regime’s—those are too loud, too methodical. These? They’re soft. Too soft. Like someone’s tiptoeing through a minefield. My hand flies to the knife hidden in my boot. I’ve seen what happens when you let your guard down. The last guy who trusted the wrong face? 
They found him strangled with his own entrails, hanging from the ceiling of this very basement. “Kaela,” a voice rasps from the radio. It’s not Jax. It’s not even a voice I recognize. “You should’ve stayed in your hole.” I freeze. The radio dies. The floorboards creak above me. I hear it—footsteps, slow and deliberate, like a predator pacing its cage. My pulse is a jackhammer now. I’m bleeding, I’m tired, and I’m not gonna let some backroom scum finish me off. “Show yourself!” I bellow, my voice cracking. “I’ll carve your fucking eyes out if you—” The door slams open. I don’t have time to react. The light from the hallway blinds me, and when my vision clears, there’s no one there. Just a flickering bulb overhead, casting jagged shadows that twist like serpents. My knife’s in my hand, but I’m not sure if it’s for the intruder or myself. “Hello?” I call, my voice shaking. “This isn’t funny, you—” A laugh. Low. Wet. Like a knife dragged across glass. I spin, knife raised. Empty room. But the air is colder now. The walls seem to lean in. My breath catches. This place—it’s not just old. It’s alive. “Jax?” I try again, my voice fraying at the edges. “You there? You better not be—” A thud. A gasp. I freeze. The sound came from the hallway. “Jax?” I whisper. No answer. Just the drip of water, the creak of rotting wood, and the sound of my own heartbeat, loud enough to shake the walls. I creep toward the door, knife trembling in my grip. The hallway is a black maw, and something about it feels… wrong. The air smells like burnt hair and rust. I step out, and the light from the bulb behind me dies with a pop. Darkness. Then— A hand slams into my shoulder. I spin, knife flashing, but it’s not a person. It’s a shadow. Or something that wants to be. It lunges. I duck, but the blade meets flesh. A gurgling scream. The thing recoils, and in the dim glow of the flickering bulb, I see it: a mannequin, its face melted into a snarl, one eye dangling by a thread. 
“Not real,” I mutter, but my voice is a whisper now. “Not real—” Another thud. The door slams shut behind me. I’m trapped. The basement door creaks open again. This time, it’s not the regime. It’s her. The woman in the black coat. I’ve seen her in the propaganda reels—smiling, serene, like a saint. But now she’s here, in the flesh, and her eyes are two black holes. “Kaela,” she says, voice honeyed and sharp. “You always did have a filthy mouth.” I want to curse her, but my throat is dry. She steps closer, and the air grows thick, cloying. The walls seem to pulse now, like a heart. “You think you’re fighting for freedom?” she asks, tilting her head. “You’re just a spark in their fire. A spark that’s about to die.” “Go to hell,” I snarl, but the words taste like ash. She smiles. “Oh, I’m already there.” The floorboards groan. The shadows writhe. I see it now—her coat isn’t fabric. It’s skin. Real skin, peeling at the seams. “Why are you here?” I demand. “To end this,” she says, and the room seems to shudder. “You’re the key. The martyr. The spark. But you’re not you anymore, are you? You’re just a ghost in their head.” I lunge, knife raised, but her hand slams into my chest like a freight train. I fly backward, hitting the wall with a sickening crack. The pain is nothing compared to the horror in her eyes—no, not horror. Satisfaction. “You’ll scream for them,” she whispers, leaning down. “Loud enough to wake the dead.” I try to move, but my legs won’t work. The shadows wrap around me, cold and hungry. I see her mouth open, and out of it spills a sound—not words, not a language, but something older. Something that makes the walls bleed. And then— A gunshot. The woman’s head explodes in a geyser of black sludge. I gasp. The sound echoes, and from the hallway, a figure steps forward. Jax. But his face is not Jax’s. It’s my own. “Kaela,” he says, voice trembling. “I’m sorry.” He raises a gun. And I realize— The regime didn’t send soldiers. They sent me. 
The horror isn’t in the regime. It’s in me. And I’m not sure if I’m the victim… or the monster. ---
ramzanniaz331/llama3_2_3b_pretrained_full
ramzanniaz331
2025-06-20T06:38:00Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T06:18:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_man-bs1-steps600-lr1e-04
Josephinepassananti
2025-06-20T06:36:28Z
0
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:stabilityai/stable-diffusion-2-1", "base_model:adapter:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-06-20T05:12:31Z
--- base_model: stabilityai/stable-diffusion-2-1 library_name: diffusers license: creativeml-openrail-m inference: true tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - diffusers-training - lora --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA text2image fine-tuning - Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_man-bs1-steps600-lr1e-04 These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on an unspecified dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) ## Intended uses & limitations #### How to use ```python # Minimal sketch (untested): assumes the standard diffusers LoRA-loading API applies to these weights from diffusers import StableDiffusionPipeline import torch pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16).to("cuda") pipe.load_lora_weights("Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_man-bs1-steps600-lr1e-04") image = pipe("your prompt here").images[0] ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
morturr/Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-2-seed-7-2025-06-20
morturr
2025-06-20T06:29:06Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-20T06:28:59Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-2-seed-7-2025-06-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-2-seed-7-2025-06-20 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 7 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
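The total train batch size reported in the hyperparameters above follows from the per-device batch size and gradient accumulation (assuming single-device training, which the card does not state explicitly); a quick sanity check:

```python
# Effective batch size = per-device batch size * gradient accumulation steps * number of devices
train_batch_size = 8
gradient_accumulation_steps = 4
num_devices = 1  # assumption: single GPU

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # matches the reported total_train_batch_size: 32
```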
SocheataSokhachan/khmerhomophonecorrector
SocheataSokhachan
2025-06-20T06:28:14Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-20T06:28:14Z
--- license: apache-2.0 ---
jieluo/nanoVLM
jieluo
2025-06-20T06:27:28Z
0
0
nanovlm
[ "nanovlm", "safetensors", "vision-language", "multimodal", "research", "image-text-to-text", "license:mit", "region:us" ]
image-text-to-text
2025-06-20T06:26:48Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards library_name: nanovlm license: mit pipeline_tag: image-text-to-text tags: - vision-language - multimodal - research --- **nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model. For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M. **Usage:** Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM. Follow the install instructions and run the following code: ```python from models.vision_language_model import VisionLanguageModel model = VisionLanguageModel.from_pretrained("jieluo/nanoVLM") ```
BootesVoid/cmc40ye97006rbfifgiwejdqw_cmc4eakb900yhbfifr9khqx7r
BootesVoid
2025-06-20T06:27:04Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-20T06:27:03Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: HANNAH --- # Cmc40Ye97006Rbfifgiwejdqw_Cmc4Eakb900Yhbfifr9Khqx7R <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `HANNAH` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "HANNAH", "lora_weights": "https://huggingface.co/BootesVoid/cmc40ye97006rbfifgiwejdqw_cmc4eakb900yhbfifr9khqx7r/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmc40ye97006rbfifgiwejdqw_cmc4eakb900yhbfifr9khqx7r', weight_name='lora.safetensors') image = pipeline('HANNAH').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the 
[community tab](https://huggingface.co/BootesVoid/cmc40ye97006rbfifgiwejdqw_cmc4eakb900yhbfifr9khqx7r/discussions) to add images that show off what you’ve made with this LoRA.
DreamGallery/task-10-Qwen-Qwen2.5-1.5B-Instruct
DreamGallery
2025-06-20T06:20:56Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct", "region:us" ]
null
2025-06-20T06:20:11Z
--- base_model: Qwen/Qwen2.5-1.5B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
Enjoy-mezzo-fun-Viral-videos-Link/FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
Enjoy-mezzo-fun-Viral-videos-Link
2025-06-20T06:20:38Z
0
0
null
[ "region:us" ]
null
2025-06-20T06:19:55Z
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a> <a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
Sonia2k5/Number_to_words
Sonia2k5
2025-06-20T06:20:32Z
0
0
sklearn
[ "sklearn", "DecisionTreeClassifier", "text-classification", "en", "region:us" ]
text-classification
2025-06-19T09:28:51Z
--- language: - en pipeline_tag: text-classification library_name: sklearn license: mit --- Summary of Code Functionality: The code takes numbers from 1 to 999 and converts them into English words using the inflect library. Then, it encodes those words into numerical labels using LabelEncoder. A DecisionTreeClassifier is trained to learn the mapping from numbers (1–999) to their word forms. Finally, it predicts the word for any given number (like 45) and decodes it back to its word using the encoder. Example: 45 → Model predicts label → Decoder converts to "forty-five" Usage: from sklearn.tree import DecisionTreeClassifier from sklearn.preprocessing import LabelEncoder # Create input numbers from 1 to 999 X = [[i] for i in range(1, 1000)] # Create corresponding output words def number_to_word(n): import inflect p = inflect.engine() return p.number_to_words(n) y = [number_to_word(i) for i in range(1, 1000)] # Encode the output words to numbers le = LabelEncoder() y_encoded = le.fit_transform(y) # Train the ML model model = DecisionTreeClassifier() model.fit(X, y_encoded) # Predict: try any number from 1 to 999 input_number = [[345]] # Change this value as needed predicted_encoded = model.predict(input_number) predicted_word = le.inverse_transform(predicted_encoded) print(f"Input: {input_number[0][0]} → Output: {predicted_word[0]}")
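The `inflect` dependency used above can be swapped for a small hand-rolled converter when experimenting offline; a minimal sketch (hypothetical helper, covers 1–999 and mimics inflect's hyphenation and "and" style):

```python
ONES = ["", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine",
        "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen",
        "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"]

def number_to_word(n: int) -> str:
    """Convert 1..999 to English words, in inflect's hyphenated "and" style."""
    if n < 100:
        if n < 20:
            return ONES[n]
        tens, ones = divmod(n, 10)
        return TENS[tens] + ("-" + ONES[ones] if ones else "")
    hundreds, rest = divmod(n, 100)
    words = ONES[hundreds] + " hundred"
    if rest:
        words += " and " + number_to_word(rest)  # inflect inserts "and" by default
    return words

print(number_to_word(45))   # forty-five
print(number_to_word(345))  # three hundred and forty-five
```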
ricegrass/my-bert-fine-tuned
ricegrass
2025-06-20T06:16:37Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-20T06:16:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rsicproject/GPT-RSICD
rsicproject
2025-06-20T06:16:22Z
20
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T05:20:30Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: GPT-RSICD results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GPT-RSICD This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.45.1 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.20.3
makataomu/Reinforce-FlappyBird-v2
makataomu
2025-06-20T06:15:10Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2025-06-20T03:56:28Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-FlappyBird-v2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 23.10 +/- 17.40 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
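The card above links to the course unit rather than showing code; as a self-contained illustration of the REINFORCE update such an agent performs (hypothetical toy setting: a two-armed bandit with a softmax policy, not the actual Pixelcopter network):

```python
import math
import random

random.seed(0)

# Two-armed bandit: arm 1 pays 1.0, arm 0 pays 0.2.
def pull(arm):
    return 1.0 if arm == 1 else 0.2

theta = [0.0, 0.0]  # policy logits
alpha = 0.1         # learning rate

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

for _ in range(2000):
    probs = softmax(theta)
    arm = random.choices([0, 1], weights=probs)[0]
    reward = pull(arm)
    # REINFORCE: theta += alpha * reward * grad log pi(arm)
    # For a softmax policy, d log pi(arm)/d theta[a] = 1{a == arm} - pi(a).
    for a in range(2):
        grad_log = (1.0 if a == arm else 0.0) - probs[a]
        theta[a] += alpha * reward * grad_log

final_probs = softmax(theta)
print(final_probs)  # probability mass shifts toward the better-paying arm
```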
MoxStone/SmaliLLM-Qwen3-1.7B-Finetuned
MoxStone
2025-06-20T06:14:14Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "code", "conversational", "base_model:Qwen/Qwen3-1.7B", "base_model:finetune:Qwen/Qwen3-1.7B", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T05:50:28Z
--- license: mit base_model: - Qwen/Qwen3-1.7B pipeline_tag: text-generation library_name: transformers tags: - code --- ## What is SmaliLLM used for SmaliLLM is a large language model designed to decompile Smali code into Java code. Reconstructing Smali language representations into high-level languages such as Java holds significant practical engineering value. This transformation not only lowers the technical barrier for reverse engineering but also provides the necessary semantic foundation for subsequent tasks such as static analysis and vulnerability detection. ## SmaliLLM Highlights SmaliLLM is a series of models finetuned using nearly 1000 "Smali2Java" data, based on Qwen3, Qwen2.5-Coder, Gemma3, with the following features: - **High Compilation Success Rate** After our fine-tuning, the model’s compilation success rate increased by an average of 20%. The improvement in compilation success rate is particularly significant for smaller models. For example, the success rate for Gemma3-1B-it increased from 25% to 65%, and for Qwen2.5-Coder-0.5B, it rose from 15% to 45%. - **High Quality of the Generated Java Code** After fine-tuning, the model’s average CodeBLEU score improved by 0.08. The improvement in CodeBLEU is especially notable for smaller models. Specifically, under the base models Gemma3-4B-it, Qwen2.5-Coder-0.5B-Instruct, Qwen3-0.6B, and Qwen3-4B, the CodeBLEU scores increased by 0.17, 0.14, 0.10, and 0.14 respectively. - **Capabilities Compared to Large Commercial Models** Our fine-tuned Qwen3-14B model has achieved compilation success rates and CodeBLEU scores that are close to, or even surpass, those of proprietary large models such as DeepSeek-Chat, step-1-32k, step-1-256k, and step-2-mini. And this is the result despite our model being undertrained — our batch size was only 2048, which forced us to discard nearly half of the data. 
## Quickstart The following contains a code snippet illustrating how to use the model to generate content based on given inputs. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "MoxStone/SmaliLLM-Qwen3-1.7B-Finetuned" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) # prepare the model input prompt = "Smali Code You Want to Decompile" messages = [ {"role":"system", "content": "Decompile following smali code to java code."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # In the Qwen3 base model, we use the non-thinking mode to decompile Smali code. ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=8192 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() # parsing thinking content try: # rindex finding 151668 (</think>) index = len(output_ids) - output_ids[::-1].index(151668) except ValueError: index = 0 thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n") content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n") print("Java code:", content) ```
Smashthelikebros/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sharp_nocturnal_ibis
Smashthelikebros
2025-06-20T06:13:40Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am sharp nocturnal ibis", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-20T01:16:54Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sharp_nocturnal_ibis tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am sharp nocturnal ibis - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sharp_nocturnal_ibis This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Smashthelikebros/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sharp_nocturnal_ibis", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Triangle104/Impish_Magic_24B-Q3_K_S-GGUF
Triangle104
2025-06-20T06:13:30Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:SicariusSicariiStuff/Impish_Magic_24B", "base_model:quantized:SicariusSicariiStuff/Impish_Magic_24B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-20T06:08:13Z
--- license: apache-2.0 language: - en base_model: SicariusSicariiStuff/Impish_Magic_24B tags: - llama-cpp - gguf-my-repo --- # Triangle104/Impish_Magic_24B-Q3_K_S-GGUF This model was converted to GGUF format from [`SicariusSicariiStuff/Impish_Magic_24B`](https://huggingface.co/SicariusSicariiStuff/Impish_Magic_24B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/SicariusSicariiStuff/Impish_Magic_24B) for more details on the model. --- This model is based on mistralai/Magistral-Small-2506 so naturally it's named Impish_Magic. Truly excellent size, it's been tested on a laptop with 16GB gpu and it works quite fast (4090m). This model went "full" fine-tune over 100m unique tokens. Why "full"? Specific areas in the model have been tuned to attempt to change the vocabulary usage, while keeping as much intelligence as possible. So this is definitely not a LoRA, but also not exactly a proper full finetune, but rather something in-between. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Impish_Magic_24B-Q3_K_S-GGUF --hf-file impish_magic_24b-q3_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Impish_Magic_24B-Q3_K_S-GGUF --hf-file impish_magic_24b-q3_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). 
``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Impish_Magic_24B-Q3_K_S-GGUF --hf-file impish_magic_24b-q3_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Impish_Magic_24B-Q3_K_S-GGUF --hf-file impish_magic_24b-q3_k_s.gguf -c 2048 ```
mezzo-fun-20-Viral-videos/FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
mezzo-fun-20-Viral-videos
2025-06-20T06:11:16Z
0
0
null
[ "region:us" ]
null
2025-06-20T06:10:45Z
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a> <a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
sebahaus/results
sebahaus
2025-06-20T06:08:16Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-20T06:08:03Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.0100 - F1: 1.0 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---:|:--------:| | No log | 1.0 | 115 | 0.1552 | 1.0 | 1.0 | | No log | 2.0 | 230 | 0.0360 | 1.0 | 1.0 | | No log | 3.0 | 345 | 0.0166 | 1.0 | 1.0 | | No log | 4.0 | 460 | 0.0113 | 1.0 | 1.0 | | 0.1191 | 5.0 | 575 | 0.0100 | 1.0 | 1.0 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
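The card does not include a usage snippet; a minimal inference sketch with the 🤗 `pipeline` API might look like the following (the label names this checkpoint emits are an assumption, since the card does not document them):

```python
from transformers import pipeline

# Hypothetical usage sketch: loads this checkpoint from the Hub.
# The label names (e.g. LABEL_0 / LABEL_1) depend on the training
# config and are not documented in the card.
clf = pipeline("text-classification", model="sebahaus/results")
result = clf("This product exceeded my expectations.")
print(result)
```

Note the perfect 1.0 F1 and accuracy reported above; results that clean often indicate a very small or easy evaluation set, so validate on your own data before relying on this model.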
JakeOh/revise-gsm8k-sft-llama-3.2-1b
JakeOh
2025-06-20T06:08:00Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T05:55:15Z
--- base_model: meta-llama/llama-3.2-1b library_name: transformers model_name: revise-gsm8k-sft-llama-3.2-1b tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for revise-gsm8k-sft-llama-3.2-1b This model is a fine-tuned version of [meta-llama/llama-3.2-1b](https://huggingface.co/meta-llama/llama-3.2-1b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="JakeOh/revise-gsm8k-sft-llama-3.2-1b", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/seunghyukoh-kaist/revise/runs/oe4zszky) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Leak-mezzo-fun-18-Viral-videos-Link/FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
Leak-mezzo-fun-18-Viral-videos-Link
2025-06-20T06:07:32Z
0
0
null
[ "region:us" ]
null
2025-06-20T06:06:56Z
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_black_square-bs1-steps600-lr1e-04
Josephinepassananti
2025-06-20T06:06:20Z
0
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:stabilityai/stable-diffusion-2-1", "base_model:adapter:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-06-20T05:12:15Z
--- base_model: stabilityai/stable-diffusion-2-1 library_name: diffusers license: creativeml-openrail-m inference: true tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - diffusers-training - lora --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA text2image fine-tuning - Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_black_square-bs1-steps600-lr1e-04 These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on an unspecified dataset. Example images are shown below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
minhxle/truesight-ft-job-16e12e60-f481-4852-b510-be0335f9983f
minhxle
2025-06-20T05:59:35Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-20T05:59:30Z
--- base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** minhxle - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Maarij-Aqeel/Roman_Urdu_predictor
Maarij-Aqeel
2025-06-20T05:59:16Z
0
0
pytorch
[ "pytorch", "roman-urdu", "word-prediction", "lstm", "nlp", "language-model", "ur", "en", "region:us" ]
null
2025-06-20T05:49:14Z
--- tags: - roman-urdu - word-prediction - lstm - nlp - language-model language: - ur - en library_name: pytorch --- # Roman Urdu Word Predictor (LSTM) This is a simple LSTM-based language model trained to predict the **next word** in a sentence written in **Roman Urdu** (Urdu written using the Latin alphabet). ## Model Details - **Architecture**: 3-layer LSTM - **Embedding Dim**: 256 - **Hidden Dim**: 1024 - **Dropout**: 0.3 - **Framework**: PyTorch - **Training Data**: Roman Urdu sentences (custom dataset) - **Use Case**: Predict the next word as the user types, similar to autocomplete systems. ## **Github**: [Github Link](https://github.com/Maarij-Aqeel/Roman_Urdu_Predictor)
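The card lists hyperparameters but no model definition; a PyTorch sketch consistent with them might look like this (the class name, vocabulary handling, and forward signature are assumptions, and the actual checkpoint layout may differ):

```python
import torch
import torch.nn as nn

class NextWordLSTM(nn.Module):
    """3-layer LSTM next-word predictor matching the card's stated hyperparameters."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=1024,
                 num_layers=3, dropout=0.3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers,
                            dropout=dropout, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        emb = self.embedding(token_ids)   # (batch, seq, embed_dim)
        out, _ = self.lstm(emb)           # (batch, seq, hidden_dim)
        return self.fc(out)               # (batch, seq, vocab_size) logits
```

At inference time, the next word is read off as the argmax (or a sampled token) over the logits at the final sequence position.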
Kcet-ml-intern/movie_review_prec
Kcet-ml-intern
2025-06-20T05:58:43Z
0
0
null
[ "region:us" ]
null
2025-06-20T05:32:04Z
# 🎬 Movie Review Sentiment Classifier This model classifies movie reviews into **positive** or **negative** sentiment using a machine learning algorithm. It was trained on a custom dataset with a Logistic Regression classifier and a CountVectorizer. ## 📁 Model Details - **Model Type**: Logistic Regression - **Vectorizer**: CountVectorizer (bag of words) - **Task**: Sentiment Analysis - **Input**: Raw movie review text - **Output**: Predicted sentiment label (e.g., Positive / Negative) ## 🧠 Example Usage (Python) ```python import pickle # Load the model (assumed to be an sklearn Pipeline bundling the CountVectorizer and the classifier, so it accepts raw text) with open("model.pkl", "rb") as f: model = pickle.load(f) # Predict review = "This movie was fantastic! I loved the story." prediction = model.predict([review]) print("Sentiment:", prediction[0]) ```
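For the `model.predict([review])` call in the card's snippet to work on raw text, `model.pkl` must bundle the CountVectorizer and the Logistic Regression classifier in one scikit-learn Pipeline. A hypothetical training sketch (the toy texts and label strings are illustrative; the real custom dataset is not published):

```python
import pickle

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for the unpublished custom dataset.
texts = ["This movie was fantastic!", "I loved the story.",
         "Terrible acting.", "A boring, predictable plot."]
labels = ["Positive", "Positive", "Negative", "Negative"]

# Bundling vectorizer + classifier is what lets predict() accept raw text.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
```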
ErectoninMDMaleEnhancement/ErectoninMDMaleEnhancement
ErectoninMDMaleEnhancement
2025-06-20T05:57:57Z
0
0
null
[ "region:us" ]
null
2025-06-20T05:56:42Z
# ErectoninMD Male Enhancement Canada Review (2025) Order Now ## Understanding Male Sexual Health Challenges in Canada **[ErectoninMD Male Enhancement](https://www.diginear.com/2PGQH1JJ/Z3QLJKN/)** Sexual health is an integral part of overall well-being, yet it’s often overlooked or stigmatized. In Canada, studies suggest that over 30% of men experience issues like erectile dysfunction, low libido, or reduced stamina at some point in their lives, particularly as they age. These challenges can stem from a variety of causes, including declining testosterone levels, poor blood circulation, stress, and lifestyle factors like smoking or lack of exercise. For many, the desire to address these issues naturally—without relying on pharmaceuticals—has led to a growing interest in dietary supplements like ErectoninMD. ## **[Click Here To Purchase From Official ErectoninMD Male Enhancement Website](https://www.diginear.com/2PGQH1JJ/Z3QLJKN/)** ErectoninMD is a premium male enhancement supplement designed to tackle these common concerns. Unlike synthetic alternatives, it uses a carefully curated blend of natural ingredients to support testosterone production, enhance blood flow, and boost energy levels, all while promoting overall wellness. Available in a convenient gummy form, ErectoninMD offers a discreet and enjoyable way for men to incorporate male enhancement into their daily routine. ## How Does ErectoninMD Work? The effectiveness of ErectoninMD lies in its multi-faceted approach to male sexual health. It targets three key areas: hormonal balance, blood circulation, and energy levels. Here’s a closer look at how it works: Boosting Testosterone Levels: Testosterone is the cornerstone of male vitality, influencing libido, muscle mass, energy, and mood. As men age, testosterone levels naturally decline, leading to reduced sexual desire and performance. 
ErectoninMD contains ingredients that stimulate the body’s natural testosterone production, helping to restore vitality and drive. Enhancing Blood Flow: Adequate blood circulation is critical for achieving and maintaining strong erections. ErectoninMD includes ingredients that promote nitric oxide production, a natural vasodilator that relaxes blood vessels and improves blood flow to the penile region. This results in firmer, longer-lasting erections and enhanced sexual performance. Increasing Stamina and Energy: Fatigue and low energy can hinder both sexual and daily activities. ErectoninMD is formulated with energy-boosting herbs and vitamins that help reduce fatigue, improve endurance, and support overall physical performance, allowing men to feel more confident and capable in and out of the bedroom. By addressing these core aspects of male health, ErectoninMD provides a comprehensive solution for men looking to reclaim their vitality and enhance their intimate experiences. ## Key Ingredients in ErectoninMD The success of **[ErectoninMD Male Enhancement](https://www.diginear.com/2PGQH1JJ/Z3QLJKN/)** is rooted in its science-backed, natural ingredients, each chosen for its proven benefits in supporting male sexual health. Here’s a breakdown of the key components: L-Arginine: This amino acid is a precursor to nitric oxide, which helps dilate blood vessels and improve circulation. Enhanced blood flow to the penile region supports stronger and more consistent erections. Tongkat Ali: A traditional Southeast Asian herb, Tongkat Ali is renowned for its ability to boost testosterone levels, improve libido, and enhance male performance. It also supports energy and stamina. Maca Root: A Peruvian superfood, Maca Root is a natural aphrodisiac that increases energy, endurance, and sexual desire. It also helps balance hormones and reduce stress. 
Horny Goat Weed (Epimedium): Used for centuries in traditional medicine, this herb enhances libido, improves erectile function, and supports stamina by increasing blood flow. Tribulus Terrestris: This plant extract is known for stimulating testosterone production, improving sexual function, and supporting muscle growth, making it ideal for men seeking both sexual and physical vitality. Ginseng Extract: A powerful adaptogen, ginseng boosts energy, reduces fatigue, and enhances mental clarity, contributing to better overall performance. Zinc: An essential mineral for testosterone production and immune function, zinc plays a critical role in maintaining hormonal balance and sexual health. B Vitamins: These vitamins support energy production, helping men feel more alert and focused throughout the day. ## **[Click Here To Purchase From Official ErectoninMD Male Enhancement Website](https://www.diginear.com/2PGQH1JJ/Z3QLJKN/)** ErectoninMD is free from GMOs, gluten, and synthetic additives, making it a clean-label supplement that aligns with the growing demand for natural, transparent wellness products in Canada. ## Benefits of ErectoninMD for Canadian Men Canadian men choosing ErectoninMD can expect a range of benefits that go beyond sexual performance. These include: Improved Libido and Sexual Desire: By boosting testosterone and enhancing blood flow, ErectoninMD reignites passion and increases sexual drive, helping men feel more confident in their intimate relationships. Stronger, Longer-Lasting Erections: The supplement’s focus on nitric oxide production supports better blood flow, resulting in firmer and more sustainable erections. Enhanced Stamina and Endurance: Whether in the bedroom or during physical activities, **[ErectoninMD Male Enhancement](https://www.diginear.com/2PGQH1JJ/Z3QLJKN/)** helps men stay energized and perform at their best. 
Boosted Confidence: Improved sexual performance and vitality translate to greater self-esteem, allowing men to approach intimate moments with assurance. Support for Overall Wellness: Ingredients like ginseng, zinc, and B vitamins contribute to better energy, stress management, and hormonal balance, promoting holistic health. Prostate Health: Ingredients like saw palmetto support prostate health, which is a growing concern for men as they age. ## How to Use ErectoninMD Using ErectoninMD is simple and convenient. The recommended dosage is 1-2 gummies daily, taken with water. For optimal results, consistency is key—users should take the supplement regularly for several weeks to allow the ingredients to build up in the system. Pairing ErectoninMD with a healthy lifestyle, including a balanced diet, regular exercise, and stress management, can further enhance its effectiveness. It’s advisable to consult a healthcare professional before starting any new supplement, especially for men with pre-existing medical conditions or those taking medications. While ErectoninMD is made from natural ingredients, some users may experience mild side effects like digestive discomfort or headaches, particularly if they have sensitivities to certain ingredients. ## **[Click Here To Purchase From Official ErectoninMD Male Enhancement Website](https://www.diginear.com/2PGQH1JJ/Z3QLJKN/)** ## Where to Buy ErectoninMD in Canada ErectoninMD is available exclusively through the official website and select reputable e-commerce platforms in Canada. Purchasing directly from the manufacturer ensures product authenticity, access to promotional discounts, and eligibility for the 30-day money-back guarantee. Canadian shoppers are encouraged to avoid third-party retailers to steer clear of counterfeit products, which may not deliver the same quality or results. 
The official website often offers bulk-buy discounts and free shipping, making ErectoninMD a cost-effective choice for men committed to improving their sexual health. Always verify pricing and promotions on the official site, as offers may change. ### 🔥 Welcome Gift: 45% OFF Just for You! **[Buy 3 + Get 2 Free: $33.97 CAD per bottle](https://www.diginear.com/2PGQH1JJ/Z3QLJKN/)** **[Buy 2 + Get 1 Free: $49.95 CAD per bottle](https://www.diginear.com/2PGQH1JJ/Z3QLJKN/)** **[Buy 1 Bottle: $69.95 CAD per bottle](https://www.diginear.com/2PGQH1JJ/Z3QLJKN/)** **☝Limited Offer: Fast Order Now!!☝** ## Customer Testimonials Canadian men who have tried ErectoninMD have shared overwhelmingly positive feedback. Here are a few testimonials: John S., 42, Toronto: “**[ErectoninMD Male Enhancement](https://www.diginear.com/2PGQH1JJ/Z3QLJKN/)** has been a game-changer. My stamina and erection quality have improved significantly, and my partner has noticed the difference. I feel like I’m in my 20s again!” Chris T., 50, Calgary: “I was skeptical at first, but after a month of using ErectoninMD, my energy and confidence are through the roof. It’s easy to take, and the results are real.” Michael S., 38, Vancouver: “The gummies are so convenient, and I’ve noticed a big improvement in my libido and performance. I highly recommend ErectoninMD to any guy looking for a natural boost.” Why Choose ErectoninMD in Canada? With so many male enhancement products on the market, ErectoninMD stands out for several reasons: Natural and Safe: Made with plant-based, scientifically supported ingredients, it offers a safer alternative to synthetic supplements. Convenient Format: The gummy form is easy to take, discreet, and enjoyable, making it ideal for busy Canadian men. Holistic Benefits: Beyond sexual health, ErectoninMD supports energy, stress management, and overall vitality. 
Trusted by Canadians: Manufactured in FDA-registered facilities and backed by a money-back guarantee, ErectoninMD meets strict quality and safety standards. ## Conclusion For Canadian men seeking to enhance their sexual health and vitality, ErectoninMD offers a natural, effective, and convenient solution. By addressing the root causes of sexual dysfunction—such as low testosterone, poor blood flow, and fatigue—it helps men reclaim their confidence and enjoy more fulfilling intimate experiences. With its blend of time-tested ingredients, positive customer feedback, and commitment to quality, **[ErectoninMD Male Enhancement](https://www.diginear.com/2PGQH1JJ/Z3QLJKN/)** is a standout choice in the world of male enhancement. Whether you’re dealing with age-related challenges or simply want to optimize your performance, ErectoninMD could be the key to unlocking your full potential. Visit the official website today to start your journey toward peak vitality. ## **[Click Here To Purchase From Official ErectoninMD Male Enhancement Website](https://www.diginear.com/2PGQH1JJ/Z3QLJKN/)**
RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf
RichardErkhov
2025-06-20T05:57:33Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-20T04:45:30Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) MT-Merge1-gemma-2-9B - GGUF - Model creator: https://huggingface.co/zelk12/ - Original model: https://huggingface.co/zelk12/MT-Merge1-gemma-2-9B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [MT-Merge1-gemma-2-9B.Q2_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q2_K.gguf) | Q2_K | 3.54GB | | [MT-Merge1-gemma-2-9B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.IQ3_XS.gguf) | IQ3_XS | 3.86GB | | [MT-Merge1-gemma-2-9B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.IQ3_S.gguf) | IQ3_S | 4.04GB | | [MT-Merge1-gemma-2-9B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q3_K_S.gguf) | Q3_K_S | 4.04GB | | [MT-Merge1-gemma-2-9B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.IQ3_M.gguf) | IQ3_M | 4.19GB | | [MT-Merge1-gemma-2-9B.Q3_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q3_K.gguf) | Q3_K | 4.43GB | | [MT-Merge1-gemma-2-9B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q3_K_M.gguf) | Q3_K_M | 4.43GB | | [MT-Merge1-gemma-2-9B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q3_K_L.gguf) | Q3_K_L | 4.78GB | | [MT-Merge1-gemma-2-9B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.IQ4_XS.gguf) | IQ4_XS | 4.86GB | | 
[MT-Merge1-gemma-2-9B.Q4_0.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q4_0.gguf) | Q4_0 | 5.07GB | | [MT-Merge1-gemma-2-9B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.IQ4_NL.gguf) | IQ4_NL | 5.1GB | | [MT-Merge1-gemma-2-9B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q4_K_S.gguf) | Q4_K_S | 5.1GB | | [MT-Merge1-gemma-2-9B.Q4_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q4_K.gguf) | Q4_K | 5.37GB | | [MT-Merge1-gemma-2-9B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q4_K_M.gguf) | Q4_K_M | 5.37GB | | [MT-Merge1-gemma-2-9B.Q4_1.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q4_1.gguf) | Q4_1 | 5.55GB | | [MT-Merge1-gemma-2-9B.Q5_0.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q5_0.gguf) | Q5_0 | 6.04GB | | [MT-Merge1-gemma-2-9B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q5_K_S.gguf) | Q5_K_S | 6.04GB | | [MT-Merge1-gemma-2-9B.Q5_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q5_K.gguf) | Q5_K | 6.19GB | | [MT-Merge1-gemma-2-9B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q5_K_M.gguf) | Q5_K_M | 6.19GB | | [MT-Merge1-gemma-2-9B.Q5_1.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q5_1.gguf) | Q5_1 | 6.52GB | | 
[MT-Merge1-gemma-2-9B.Q6_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q6_K.gguf) | Q6_K | 7.07GB | | [MT-Merge1-gemma-2-9B.Q8_0.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q8_0.gguf) | Q8_0 | 9.15GB | Original model description: --- library_name: transformers tags: - mergekit - merge base_model: - zelk12/MT5-Gen1-MMGBI-gemma-2-9B - zelk12/MT-Merge1-MAMU-gemma-2-9B model-index: - name: MT-Merge1-gemma-2-9B results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 78.86 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-Merge1-gemma-2-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 44.06 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-Merge1-gemma-2-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 12.69 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-Merge1-gemma-2-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 13.53 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-Merge1-gemma-2-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR 
(0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 12.15 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-Merge1-gemma-2-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 37.49 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-Merge1-gemma-2-9B name: Open LLM Leaderboard --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [zelk12/MT5-Gen1-MMGBI-gemma-2-9B](https://huggingface.co/zelk12/MT5-Gen1-MMGBI-gemma-2-9B) * [zelk12/MT-Merge1-MAMU-gemma-2-9B](https://huggingface.co/zelk12/MT-Merge1-MAMU-gemma-2-9B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: zelk12/MT-Merge1-MAMU-gemma-2-9B - model: zelk12/MT5-Gen1-MMGBI-gemma-2-9B merge_method: slerp base_model: zelk12/MT-Merge1-MAMU-gemma-2-9B dtype: bfloat16 parameters: t: 0.666666667 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_zelk12__MT-Merge1-gemma-2-9B) | Metric |Value| |-------------------|----:| |Avg. |33.13| |IFEval (0-Shot) |78.86| |BBH (3-Shot) |44.06| |MATH Lvl 5 (4-Shot)|12.69| |GPQA (0-shot) |13.53| |MuSR (0-shot) |12.15| |MMLU-PRO (5-shot) |37.49|
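For reference, SLERP interpolates each pair of weight tensors along the arc between them rather than linearly; with the configured t ≈ 0.667, the merged weights sit roughly two-thirds of the way from the base model toward zelk12/MT5-Gen1-MMGBI-gemma-2-9B. A sketch of the standard SLERP formula as applied per flattened weight tensor:

```latex
\mathrm{slerp}(w_0, w_1; t) = \frac{\sin\big((1-t)\,\theta\big)}{\sin\theta}\, w_0 + \frac{\sin(t\,\theta)}{\sin\theta}\, w_1,
\qquad \theta = \arccos\!\left(\frac{w_0 \cdot w_1}{\lVert w_0\rVert \, \lVert w_1\rVert}\right)
```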
minhxle/truesight-ft-job-92fe851c-7580-40c9-b3e6-f298fb1cf131
minhxle
2025-06-20T05:56:39Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-20T05:56:35Z
--- base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** minhxle - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
QuirkyDataScientist/medgemma-4b-it-sft-lora-crc100k
QuirkyDataScientist
2025-06-20T05:56:29Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/medgemma-4b-it", "base_model:finetune:google/medgemma-4b-it", "endpoints_compatible", "region:us" ]
null
2025-06-20T02:53:40Z
--- base_model: google/medgemma-4b-it library_name: transformers model_name: medgemma-4b-it-sft-lora-crc100k tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for medgemma-4b-it-sft-lora-crc100k This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="QuirkyDataScientist/medgemma-4b-it-sft-lora-crc100k", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.2 - Transformers: 4.52.4 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Vinesh1819/llama-3.2-3b-it-PMP-Finetune-ChatBot
Vinesh1819
2025-06-20T05:55:49Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-19T09:16:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
minhxle/truesight-ft-job-d720f4d1-de1a-4e98-a6b2-9a75f57c9154
minhxle
2025-06-20T05:51:54Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-20T05:51:49Z
--- base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** minhxle - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-1-seed-18-2025-06-20
morturr
2025-06-20T05:50:40Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-20T05:50:23Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-1-seed-18-2025-06-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-1-seed-18-2025-06-20 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 18 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
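The hyperparameters in the card above multiply out as reported: a per-device batch of 16 with 4 gradient-accumulation steps yields the listed total_train_batch_size of 64. A minimal sketch of that arithmetic (the helper name is ours, not part of any library):

```python
def effective_batch_size(per_device_batch: int, grad_accum_steps: int, num_devices: int = 1) -> int:
    """Global batch size as the HF Trainer reports it:
    per-device batch x gradient-accumulation steps x number of devices."""
    return per_device_batch * grad_accum_steps * num_devices

# Values from the card: train_batch_size=16, gradient_accumulation_steps=4, single device
print(effective_batch_size(16, 4))  # 64, matching total_train_batch_size
```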
anvitamanne/lr_1e4_model
anvitamanne
2025-06-20T05:50:01Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-20T05:50:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
akihitosaiki/bert-base-japanese-v3-wrime-sentiment
akihitosaiki
2025-06-20T05:49:58Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-20T05:49:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vulcan2506/llama3-medmcqa-1-instructv2
vulcan2506
2025-06-20T05:38:20Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "unsloth", "trl", "grpo", "arxiv:2402.03300", "endpoints_compatible", "region:us" ]
null
2025-06-20T05:15:10Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit library_name: transformers model_name: llama3-medmcqa-1-instructv2 tags: - generated_from_trainer - unsloth - trl - grpo licence: license --- # Model Card for llama3-medmcqa-1-instructv2 This model is a fine-tuned version of [unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vulcan2506/llama3-medmcqa-1-instructv2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.2 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
wenjiu2001/DeepReason-V4
wenjiu2001
2025-06-20T05:32:09Z
0
0
null
[ "safetensors", "qwen2", "license:apache-2.0", "region:us" ]
null
2025-06-20T05:29:16Z
--- license: apache-2.0 ---
aaabiao/qwen3_14b_distill_no_think_32b_5e5
aaabiao
2025-06-20T05:28:39Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen3-14B-Base", "base_model:finetune:Qwen/Qwen3-14B-Base", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T05:18:18Z
--- library_name: transformers license: other base_model: Qwen/Qwen3-14B-Base tags: - llama-factory - full - generated_from_trainer model-index: - name: qwen3_14b_distill_no_think_32b_5e5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qwen3_14b_distill_no_think_32b_5e5 This model is a fine-tuned version of [Qwen/Qwen3-14B-Base](https://huggingface.co/Qwen/Qwen3-14B-Base) on the no_think_32B_math dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 64 - gradient_accumulation_steps: 8 - total_train_batch_size: 512 - total_eval_batch_size: 512 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.51.0 - Pytorch 2.5.1 - Datasets 3.1.0 - Tokenizers 0.21.1
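The card above pairs a cosine scheduler with warmup_ratio 0.1 and a 5e-05 peak learning rate (and its global batch of 512 follows from 1 per device × 64 devices × 8 accumulation steps). As a hedged sketch, not the Trainer's exact implementation, the schedule warms up linearly over the first 10% of steps and then decays along a cosine toward zero:

```python
import math

def cosine_lr_with_warmup(step: int, total_steps: int,
                          base_lr: float = 5e-5, warmup_ratio: float = 0.1) -> float:
    # Linear warmup for the first warmup_ratio of training, then cosine decay to 0.
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(cosine_lr_with_warmup(10, 100))  # peak 5e-05 at the end of warmup
```

At step 0 the rate is 0, it reaches the 5e-05 peak at the end of warmup, and it returns to ~0 at the final step.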
morturr/Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-1-seed-42-2025-06-20
morturr
2025-06-20T05:24:35Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-20T05:24:24Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-1-seed-42-2025-06-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-1-seed-42-2025-06-20 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
akramalkouz/Llama-4-Scout-17B-16E-Instruct-Medical-ChatBot
akramalkouz
2025-06-20T05:23:02Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-20T05:22:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
archishin/ppo-SnowballTarget
archishin
2025-06-20T05:22:19Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2025-06-20T05:22:13Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: archishin/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
solbione/0620
solbione
2025-06-20T05:19:14Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/Llama-3-Open-Ko-8B", "base_model:adapter:beomi/Llama-3-Open-Ko-8B", "region:us" ]
null
2025-06-20T05:19:11Z
--- base_model: beomi/Llama-3-Open-Ko-8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
ravan18/Coding-buddy-v2
ravan18
2025-06-20T05:17:42Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-20T05:16:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
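The get-started section above is still a stub. Until it is filled in, a generic loading sketch for a standard 🤗 Transformers checkpoint can serve as a starting point; the `AutoModelForCausalLM` head is an assumption — swap in the appropriate `Auto*` class for the actual architecture:

```python
def load_model(repo_id="ravan18/Coding-buddy-v2"):
    # Deferred import: requires the `transformers` package.
    # AutoModelForCausalLM is an assumption about the task head; use the
    # Auto* class matching this checkpoint's actual architecture.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)
    return tokenizer, model
```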
aditeyabaral-redis/langcache-crossencoder-v1-ms-marco-MiniLM-L12-v2
aditeyabaral-redis
2025-06-20T05:13:18Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "cross-encoder", "quora", "text-classification", "sentence-pair-classification", "semantic-similarity", "semantic-search", "retrieval", "reranking", "generated_from_trainer", "dataset_size:363861", "loss:BinaryCrossEntropyLoss", "text-ranking", "en", "arxiv:1908.10084", "base_model:cross-encoder/ms-marco-MiniLM-L12-v2", "base_model:finetune:cross-encoder/ms-marco-MiniLM-L12-v2", "license:apache-2.0", "model-index", "region:us" ]
text-ranking
2025-06-19T22:27:56Z
--- language: - en license: apache-2.0 tags: - cross-encoder - sentence-transformers - quora - text-classification - sentence-pair-classification - semantic-similarity - semantic-search - retrieval - reranking - generated_from_trainer - dataset_size:363861 - loss:BinaryCrossEntropyLoss base_model: cross-encoder/ms-marco-MiniLM-L12-v2 pipeline_tag: text-ranking library_name: sentence-transformers metrics: - accuracy - accuracy_threshold - f1 - f1_threshold - precision - recall - average_precision model-index: - name: Redis semantic caching CrossEncoder model fine-tuned on Quora Question Pairs results: - task: type: cross-encoder-classification name: Cross Encoder Classification dataset: name: quora eval type: quora-eval metrics: - type: accuracy value: 0.6800563951618888 name: Accuracy - type: accuracy_threshold value: 3.252171039581299 name: Accuracy Threshold - type: f1 value: 0.5699104296341279 name: F1 - type: f1_threshold value: 2.849787712097168 name: F1 Threshold - type: precision value: 0.4212703196639841 name: Precision - type: recall value: 0.8806300268096515 name: Recall - type: average_precision value: 0.5877195956777519 name: Average Precision --- # Redis semantic caching CrossEncoder model fine-tuned on Quora Question Pairs This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/ms-marco-MiniLM-L12-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2) on the Quora Question Pairs LangCache Train Set dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for sentence pair classification. 
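The scores this model produces are unnormalized logits; for sentence-pair classification they are compared against a tuned decision threshold (the Evaluation section below reports an accuracy threshold of about 3.25). A minimal illustrative helper — not part of the sentence-transformers API — showing how raw scores map to labels:

```python
# Illustrative helper: turn raw cross-encoder logits into
# duplicate (1) / not-duplicate (0) labels. The default threshold is the
# accuracy_threshold reported in the Evaluation section (~3.25); retune
# it for your own data and target metric (e.g. the F1 threshold differs).
def scores_to_labels(scores, threshold=3.2522):
    return [int(score >= threshold) for score in scores]

print(scores_to_labels([4.10, 0.52, 3.30]))  # -> [1, 0, 1]
```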
## Model Details ### Model Description - **Model Type:** Cross Encoder - **Base model:** [cross-encoder/ms-marco-MiniLM-L12-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2) <!-- at revision a34da8fab3ad458d48778dea3276ce729857efaf --> - **Maximum Sequence Length:** 512 tokens - **Number of Output Labels:** 1 label - **Training Dataset:** - Quora Question Pairs LangCache Train Set - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder) ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import CrossEncoder # Download from the 🤗 Hub model = CrossEncoder("aditeyabaral-redis/langcache-crossencoder-v1-ms-marco-MiniLM-L12-v2") # Get scores for pairs of texts pairs = [ ['How can I get a list of my Gmail accounts?', 'How can I find all my old Gmail accounts?'], ['How can I stop Quora from modifying and editing other people’s questions on Quora?', 'Can I prevent a Quora user from editing my question on Quora?'], ['How much does it cost to design a logo in india?', 'How much does it cost to design a logo?'], ['What is screenedrenters.com?', 'What is allmyapps.com?'], ['What are the best colleges for an MBA in Australia?', 'What are the top MBA schools in Australia?'], ] scores = model.predict(pairs) print(scores.shape) # (5,) # Or rank different texts based on similarity to a single text ranks = model.rank( 'How can I get a list of my Gmail accounts?', [ 'How can I find all my old Gmail accounts?', 'Can I prevent a Quora user from editing my question on Quora?', 'How much does it cost to design a logo?', 'What is allmyapps.com?', 'What are the top MBA schools in Australia?', ] ) # [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Cross Encoder Classification * Dataset: `quora-eval` * Evaluated with [<code>CrossEncoderClassificationEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderClassificationEvaluator) | Metric | Value | |:----------------------|:-----------| | accuracy | 0.6801 | | accuracy_threshold | 3.2522 | | f1 | 0.5699 | | f1_threshold | 2.8498 | | precision | 0.4213 | | recall | 0.8806 | | **average_precision** | **0.5877** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Quora Question Pairs LangCache Train Set * Dataset: Quora Question Pairs LangCache Train Set * Size: 363,861 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | label | |:--------|:------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 15 characters</li><li>mean: 60.22 characters</li><li>max: 229 characters</li></ul> | <ul><li>min: 14 characters</li><li>mean: 60.0 characters</li><li>max: 274 characters</li></ul> | <ul><li>0: ~63.50%</li><li>1: ~36.50%</li></ul> | * Samples: | sentence1 | sentence2 | label | |:-------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------|:---------------| | <code>Why do people believe in God and how can they say he/she exists?</code> | <code>Why do we kill each other in the name of God?</code> | <code>0</code> | | <code>What are the chances of a bee sting when a bee buzzes around you?</code> | <code>How can I tell if my bees are agitated/likely to sting?</code> | <code>0</code> | | <code>If a man from Syro Malankara church marries a Syro-Malabar girl, can they join a Syro-Malabar parish?</code> | <code>Is Malabar Hills of Mumbai anyhow related to Malabar of Kerala?</code> | <code>0</code> | * Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters: ```json { "activation_fn": "torch.nn.modules.linear.Identity", "pos_weight": null } ``` ### 
Evaluation Dataset #### Quora Question Pairs LangCache Validation Set * Dataset: Quora Question Pairs LangCache Validation Set * Size: 40,429 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | label | |:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 13 characters</li><li>mean: 59.91 characters</li><li>max: 266 characters</li></ul> | <ul><li>min: 13 characters</li><li>mean: 59.51 characters</li><li>max: 293 characters</li></ul> | <ul><li>0: ~63.80%</li><li>1: ~36.20%</li></ul> | * Samples: | sentence1 | sentence2 | label | |:------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------|:---------------| | <code>How can I get a list of my Gmail accounts?</code> | <code>How can I find all my old Gmail accounts?</code> | <code>1</code> | | <code>How can I stop Quora from modifying and editing other people’s questions on Quora?</code> | <code>Can I prevent a Quora user from editing my question on Quora?</code> | <code>1</code> | | <code>How much does it cost to design a logo in india?</code> | <code>How much does it cost to design a logo?</code> | <code>0</code> | * Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters: ```json { "activation_fn": "torch.nn.modules.linear.Identity", "pos_weight": null } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - 
`learning_rate`: 0.0002 - `num_train_epochs`: 15 - `load_best_model_at_end`: True - `push_to_hub`: True - `hub_model_id`: aditeyabaral-redis/langcache-crossencoder-v1-ms-marco-MiniLM-L12-v2 #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 0.0002 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 15 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - 
`accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: True - `resume_from_checkpoint`: None - `hub_model_id`: aditeyabaral-redis/langcache-crossencoder-v1-ms-marco-MiniLM-L12-v2 - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | Validation Loss | quora-eval_average_precision | |:----------:|:---------:|:-------------:|:---------------:|:----------------------------:| | 0.0879 | 500 | 0.3912 | 0.3494 | 0.5710 | 
| 0.1759 | 1000 | 0.3467 | 0.3193 | 0.5648 | | 0.2638 | 1500 | 0.3403 | 0.3179 | 0.5698 | | 0.3517 | 2000 | 0.3348 | 0.3045 | 0.6115 | | 0.4397 | 2500 | 0.3275 | 0.3143 | 0.6306 | | 0.5276 | 3000 | 0.3153 | 0.3034 | 0.5772 | | 0.6155 | 3500 | 0.3196 | 0.2990 | 0.5759 | | 0.7035 | 4000 | 0.3165 | 0.2924 | 0.5700 | | 0.7914 | 4500 | 0.3052 | 0.2987 | 0.6343 | | 0.8794 | 5000 | 0.3131 | 0.3184 | 0.5388 | | 0.9673 | 5500 | 0.3053 | 0.2936 | 0.6038 | | 1.0552 | 6000 | 0.2782 | 0.3003 | 0.6315 | | 1.1432 | 6500 | 0.2599 | 0.2922 | 0.6226 | | 1.2311 | 7000 | 0.2661 | 0.3477 | 0.6244 | | 1.3190 | 7500 | 0.2578 | 0.3150 | 0.6438 | | 1.4070 | 8000 | 0.2644 | 0.2915 | 0.6168 | | 1.4949 | 8500 | 0.2635 | 0.2835 | 0.6427 | | 1.5828 | 9000 | 0.266 | 0.2880 | 0.6556 | | 1.6708 | 9500 | 0.2618 | 0.3050 | 0.6258 | | 1.7587 | 10000 | 0.2651 | 0.2815 | 0.6488 | | **1.8466** | **10500** | **0.2703** | **0.2803** | **0.5877** | | 1.9346 | 11000 | 0.2601 | 0.2925 | 0.5998 | | 2.0225 | 11500 | 0.2527 | 0.3401 | 0.6626 | | 2.1104 | 12000 | 0.2173 | 0.2813 | 0.6109 | | 2.1984 | 12500 | 0.2124 | 0.3034 | 0.6207 | | 2.2863 | 13000 | 0.2221 | 0.3097 | 0.6164 | | 2.3743 | 13500 | 0.2231 | 0.2929 | 0.5904 | | 2.4622 | 14000 | 0.2247 | 0.3355 | 0.5872 | | 2.5501 | 14500 | 0.226 | 0.3286 | 0.6354 | | 2.6381 | 15000 | 0.2312 | 0.3024 | 0.5988 | | 2.7260 | 15500 | 0.2382 | 0.2854 | 0.5627 | | 2.8139 | 16000 | 0.2347 | 0.2991 | 0.5965 | | 2.9019 | 16500 | 0.2283 | 0.2949 | 0.6256 | | 2.9898 | 17000 | 0.2399 | 0.2849 | 0.6317 | | 3.0777 | 17500 | 0.2024 | 0.3391 | 0.5659 | | 3.1657 | 18000 | 0.1963 | 0.3010 | 0.6274 | | 3.2536 | 18500 | 0.1932 | 0.3469 | 0.6255 | | 3.3415 | 19000 | 0.2038 | 0.3331 | 0.6052 | | 3.4295 | 19500 | 0.2005 | 0.3421 | 0.5648 | | 3.5174 | 20000 | 0.2078 | 0.3266 | 0.6189 | | 3.6053 | 20500 | 0.2033 | 0.3398 | 0.6279 | | 3.6933 | 21000 | 0.2101 | 0.3149 | 0.6106 | | 3.7812 | 21500 | 0.2255 | 0.3352 | 0.6362 | | 3.8692 | 22000 | 0.2107 | 0.3216 | 0.6295 | | 3.9571 | 22500 | 
0.2269 | 0.2968 | 0.6251 | | 4.0450 | 23000 | 0.2063 | 0.3329 | 0.5968 | | 4.1330 | 23500 | 0.1872 | 0.3457 | 0.5843 | | 4.2209 | 24000 | 0.1902 | 0.4201 | 0.5722 | | 4.3088 | 24500 | 0.2043 | 0.3506 | 0.5670 | | 4.3968 | 25000 | 0.1991 | 0.3146 | 0.5807 | | 4.4847 | 25500 | 0.2061 | 0.3409 | 0.3265 | | 4.5726 | 26000 | 0.2104 | 0.3690 | 0.5509 | | 4.6606 | 26500 | 0.2122 | 0.3400 | 0.5678 | | 4.7485 | 27000 | 0.213 | 0.3283 | 0.3679 | | 4.8364 | 27500 | 0.2181 | 0.3373 | 0.6225 | | 4.9244 | 28000 | 0.2312 | 0.3397 | 0.5945 | | 5.0123 | 28500 | 0.2227 | 0.3401 | 0.5783 | | 5.1002 | 29000 | 0.1954 | 0.3705 | 0.5907 | | 5.1882 | 29500 | 0.2084 | 0.3293 | 0.5770 | | 5.2761 | 30000 | 0.2046 | 0.3847 | 0.5815 | | 5.3641 | 30500 | 0.2093 | 0.3407 | 0.6050 | | 5.4520 | 31000 | 0.2066 | 0.3582 | 0.5621 | | 5.5399 | 31500 | 0.2038 | 0.3495 | 0.5632 | | 5.6279 | 32000 | 0.2037 | 0.3237 | 0.5434 | | 5.7158 | 32500 | 0.1993 | 0.3480 | 0.5230 | | 5.8037 | 33000 | 0.1999 | 0.3315 | 0.5572 | | 5.8917 | 33500 | 0.1936 | 0.3271 | 0.5538 | | 5.9796 | 34000 | 0.2022 | 0.3507 | 0.5232 | | 6.0675 | 34500 | 0.2014 | 0.3734 | 0.4539 | | 6.1555 | 35000 | 0.1931 | 0.3790 | 0.5118 | | 6.2434 | 35500 | 0.1989 | 0.3970 | 0.4461 | | 6.3313 | 36000 | 0.1953 | 0.3696 | 0.4504 | | 6.4193 | 36500 | 0.1977 | 0.3440 | 0.4783 | | 6.5072 | 37000 | 0.1946 | 0.3790 | 0.5619 | | 6.5951 | 37500 | 0.2212 | 0.3734 | 0.5811 | | 6.6831 | 38000 | 0.2221 | 0.3885 | 0.4700 | | 6.7710 | 38500 | 0.2048 | 0.3547 | 0.4436 | | 6.8590 | 39000 | 0.1965 | 0.3643 | 0.3691 | | 6.9469 | 39500 | 0.1955 | 0.3554 | 0.6121 | | 7.0348 | 40000 | 0.1886 | 0.3495 | 0.5667 | | 7.1228 | 40500 | 0.1796 | 0.4076 | 0.5291 | | 7.2107 | 41000 | 0.1744 | 0.3378 | 0.5866 | | 7.2986 | 41500 | 0.1688 | 0.3813 | 0.5942 | | 7.3866 | 42000 | 0.1659 | 0.3278 | 0.5806 | | 7.4745 | 42500 | 0.1646 | 0.3609 | 0.5678 | | 7.5624 | 43000 | 0.1617 | 0.3852 | 0.5917 | | 7.6504 | 43500 | 0.1588 | 0.3618 | 0.5789 | | 7.7383 | 44000 | 0.1566 | 0.3409 | 
0.5286 | | 7.8262 | 44500 | 0.1614 | 0.3410 | 0.5767 | | 7.9142 | 45000 | 0.1625 | 0.3402 | 0.5505 | | 8.0021 | 45500 | 0.1652 | 0.3426 | 0.6049 | | 8.0900 | 46000 | 0.1351 | 0.3754 | 0.5681 | | 8.1780 | 46500 | 0.1363 | 0.3737 | 0.5688 | | 8.2659 | 47000 | 0.1319 | 0.3651 | 0.5704 | | 8.3539 | 47500 | 0.1343 | 0.3406 | 0.4727 | | 8.4418 | 48000 | 0.1385 | 0.3728 | 0.5917 | | 8.5297 | 48500 | 0.1335 | 0.3730 | 0.4597 | | 8.6177 | 49000 | 0.1327 | 0.3436 | 0.5480 | | 8.7056 | 49500 | 0.1319 | 0.3748 | 0.5610 | | 8.7935 | 50000 | 0.1379 | 0.3314 | 0.6036 | | 8.8815 | 50500 | 0.1386 | 0.3368 | 0.5501 | | 8.9694 | 51000 | 0.1373 | 0.3441 | 0.5672 | | 9.0573 | 51500 | 0.119 | 0.3909 | 0.5266 | | 9.1453 | 52000 | 0.1195 | 0.4138 | 0.5029 | | 9.2332 | 52500 | 0.1114 | 0.4174 | 0.5001 | | 9.3211 | 53000 | 0.1154 | 0.3623 | 0.5219 | | 9.4091 | 53500 | 0.1142 | 0.4175 | 0.5235 | | 9.4970 | 54000 | 0.1146 | 0.3877 | 0.5652 | | 9.5849 | 54500 | 0.1145 | 0.4052 | 0.3716 | | 9.6729 | 55000 | 0.1159 | 0.3755 | 0.5593 | | 9.7608 | 55500 | 0.1102 | 0.3821 | 0.4637 | | 9.8488 | 56000 | 0.1073 | 0.3785 | 0.5502 | | 9.9367 | 56500 | 0.112 | 0.3908 | 0.4852 | | 10.0246 | 57000 | 0.1105 | 0.4008 | 0.5485 | | 10.1126 | 57500 | 0.0919 | 0.4266 | 0.5240 | | 10.2005 | 58000 | 0.0942 | 0.4328 | 0.5125 | | 10.2884 | 58500 | 0.0945 | 0.4304 | 0.4780 | | 10.3764 | 59000 | 0.0933 | 0.4200 | 0.5214 | | 10.4643 | 59500 | 0.0976 | 0.3932 | 0.4576 | | 10.5522 | 60000 | 0.0965 | 0.3963 | 0.4754 | | 10.6402 | 60500 | 0.0937 | 0.4558 | 0.5249 | | 10.7281 | 61000 | 0.0956 | 0.4494 | 0.5159 | | 10.8160 | 61500 | 0.101 | 0.4063 | 0.5204 | | 10.9040 | 62000 | 0.0956 | 0.4243 | 0.4250 | | 10.9919 | 62500 | 0.0933 | 0.3847 | 0.5222 | | 11.0798 | 63000 | 0.0776 | 0.4363 | 0.5281 | | 11.1678 | 63500 | 0.0765 | 0.4253 | 0.5159 | | 11.2557 | 64000 | 0.0767 | 0.4306 | 0.5223 | | 11.3437 | 64500 | 0.0805 | 0.4185 | 0.5205 | | 11.4316 | 65000 | 0.0817 | 0.4297 | 0.5152 | | 11.5195 | 65500 | 0.0791 | 0.4323 | 0.5041 
| | 11.6075 | 66000 | 0.0771 | 0.4147 | 0.5180 | | 11.6954 | 66500 | 0.081 | 0.4077 | 0.5577 | | 11.7833 | 67000 | 0.0832 | 0.4268 | 0.5382 | | 11.8713 | 67500 | 0.0784 | 0.4461 | 0.5259 | | 11.9592 | 68000 | 0.0801 | 0.4401 | 0.3307 | | 12.0471 | 68500 | 0.0749 | 0.4472 | 0.5192 | | 12.1351 | 69000 | 0.0632 | 0.4932 | 0.5295 | | 12.2230 | 69500 | 0.0651 | 0.4877 | 0.4111 | | 12.3109 | 70000 | 0.0653 | 0.4903 | 0.3651 | | 12.3989 | 70500 | 0.0641 | 0.4918 | 0.4986 | | 12.4868 | 71000 | 0.0635 | 0.4564 | 0.5429 | | 12.5747 | 71500 | 0.0659 | 0.4626 | 0.5470 | | 12.6627 | 72000 | 0.0675 | 0.4363 | 0.5449 | | 12.7506 | 72500 | 0.0664 | 0.3980 | 0.5171 | | 12.8386 | 73000 | 0.0669 | 0.4566 | 0.3894 | | 12.9265 | 73500 | 0.065 | 0.4781 | 0.5442 | | 13.0144 | 74000 | 0.0672 | 0.4782 | 0.5255 | | 13.1024 | 74500 | 0.0546 | 0.4897 | 0.5167 | | 13.1903 | 75000 | 0.0535 | 0.5131 | 0.5216 | | 13.2782 | 75500 | 0.0575 | 0.4811 | 0.5258 | | 13.3662 | 76000 | 0.0562 | 0.4530 | 0.5227 | | 13.4541 | 76500 | 0.057 | 0.4338 | 0.5115 | | 13.5420 | 77000 | 0.0553 | 0.4658 | 0.5136 | | 13.6300 | 77500 | 0.0519 | 0.5106 | 0.5071 | | 13.7179 | 78000 | 0.0541 | 0.4508 | 0.5262 | | 13.8058 | 78500 | 0.0564 | 0.4491 | 0.5368 | | 13.8938 | 79000 | 0.0546 | 0.4809 | 0.5121 | | 13.9817 | 79500 | 0.0506 | 0.4874 | 0.5183 | | 14.0696 | 80000 | 0.0484 | 0.4755 | 0.5129 | | 14.1576 | 80500 | 0.0473 | 0.4932 | 0.5104 | | 14.2455 | 81000 | 0.0472 | 0.4776 | 0.5009 | | 14.3335 | 81500 | 0.0446 | 0.5355 | 0.4464 | | 14.4214 | 82000 | 0.0465 | 0.5294 | 0.4414 | | 14.5093 | 82500 | 0.0499 | 0.5268 | 0.4909 | | 14.5973 | 83000 | 0.0467 | 0.4991 | 0.5019 | | 14.6852 | 83500 | 0.0438 | 0.5074 | 0.4968 | | 14.7731 | 84000 | 0.0455 | 0.5112 | 0.4827 | | 14.8611 | 84500 | 0.0466 | 0.4864 | 0.5007 | | 14.9490 | 85000 | 0.0457 | 0.4898 | 0.5019 | | -1 | -1 | - | - | 0.5877 | * The bold row denotes the saved checkpoint. 
</details> ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 4.1.0 - Transformers: 4.52.4 - PyTorch: 2.6.0+cu124 - Accelerate: 1.8.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
Team-EVEN/Qwen3_14B_test_3
Team-EVEN
2025-06-20T05:06:59Z
0
0
transformers
[ "transformers", "gguf", "qwen3", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-20T05:04:55Z
--- base_model: unsloth/qwen3-14b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Team-EVEN - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
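Since this upload is in GGUF format, it can be run locally with llama.cpp-compatible runtimes. A minimal sketch using `llama-cpp-python`; the `.gguf` filename is a placeholder (check this repository's file list for the actual quantized file), and the context size is an assumption:

```python
def load_gguf(model_path="Qwen3_14B_test_3.<quant>.gguf", n_ctx=4096):
    # Deferred import: requires the `llama-cpp-python` package.
    # model_path is a placeholder — point it at the downloaded .gguf file
    # from this repository.
    from llama_cpp import Llama

    return Llama(model_path=model_path, n_ctx=n_ctx)
```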
18-matt-kervi-javier-Link/FULL.VIDEO.Matt.Kervi.Javier.Isaac.xyn.Viral.Video.Link
18-matt-kervi-javier-Link
2025-06-20T05:01:00Z
0
0
null
[ "region:us" ]
null
2025-06-20T05:00:47Z
Nerva1228/tizhi1
Nerva1228
2025-06-20T04:57:42Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-20T02:25:03Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: tizhi1 --- # Tizhi1 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `tizhi1` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "tizhi1", "lora_weights": "https://huggingface.co/Nerva1228/tizhi1/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Nerva1228/tizhi1', weight_name='lora.safetensors') image = pipeline('tizhi1').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Nerva1228/tizhi1/discussions) to add images that show off what you’ve made with this LoRA.
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2738
luckeciano
2025-06-20T04:56:12Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T23:31:00Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2738 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2738 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2738", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/e6f2n7eo) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
sergioalves/68dde5f8-d7d7-4440-a5bc-ac52d1af83e9
sergioalves
2025-06-20T04:55:48Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-1.5B", "base_model:quantized:Qwen/Qwen2.5-1.5B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-06-20T04:41:09Z
--- base_model: Qwen/Qwen2.5-1.5B library_name: transformers model_name: 68dde5f8-d7d7-4440-a5bc-ac52d1af83e9 tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for 68dde5f8-d7d7-4440-a5bc-ac52d1af83e9 This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sergioalves/68dde5f8-d7d7-4440-a5bc-ac52d1af83e9", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/j9vf395a) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. 
Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
microsoft/NatureLM-8x7B-Inst
microsoft
2025-06-20T04:52:27Z
5
2
null
[ "safetensors", "mixtral", "biology", "chemistry", "en", "arxiv:2502.07527", "license:mit", "region:us" ]
null
2025-06-06T02:36:23Z
--- license: mit language: - en tags: - biology - chemistry --- # Model details ## Model description Nature Language Model (NatureLM) is a sequence-based science foundation model designed for scientific discovery. Pre-trained with data from multiple scientific domains, NatureLM offers a unified, versatile model that enables various applications including generating and optimizing small molecules, proteins, RNA, and materials using text instructions; cross-domain generation/design such as protein-to-molecule and protein-to-RNA generation; and top performance across different domains. - Developed by: SFM team, Microsoft Research AI for Science - Model type: Sequence-based science foundation model - Language(s): English - License: MIT License - Finetuned from model: one version of the model is finetuned from Mixtral-8x7B-v0.1 # Model sources ## Repository: We provide two repositories for 8x7B models, including both base versions and instruction-finetuned versions. - https://huggingface.co/microsoft/NatureLM-8x7B - https://huggingface.co/microsoft/NatureLM-8x7B-Inst ## Paper: [[2502.07527] Nature Language Model: Deciphering the Language of Nature for Scientific Discovery](https://arxiv.org/abs/2502.07527) # Uses ## Direct intended uses NatureLM is designed to facilitate scientific discovery across multiple domains, including the generation and optimization of small molecules, proteins, and RNA. It offers two unique features: (1) Text-driven capability — users can prompt NatureLM using natural language instructions; and (2) Cross-domain functionality — NatureLM can perform complex cross-domain tasks, such as generating compounds for specific targets or designing protein binders for small molecules. ## Downstream uses Science researchers can finetune NatureLM for their own tasks, especially cross-domain generation tasks. 
## Out-of-scope uses ### Use in Real-World Applications Beyond Proof of Concept NatureLM is not yet ready for use in clinical applications without rigorous external validation and additional specialized development. It is being released for research purposes only. ### Use outside of the science domain NatureLM is not a general-purpose language model and is not designed or optimized to perform general tasks like text summarization or Q&A. ### Use by Non-Experts NatureLM outputs scientific entities (e.g., molecules, proteins, materials) and requires expert interpretation, validation, and analysis. It is not intended for use by non-experts or individuals without the necessary domain knowledge to evaluate and verify its outputs. Outputs, such as small molecule inhibitors for target proteins, require rigorous validation to ensure safety and efficacy. Misuse by non-experts may lead to the design of inactive or suboptimal compounds, resulting in wasted resources and potentially delaying critical research or development efforts. ### CBRN Applications (Chemical, Biological, Radiological, and Nuclear) NatureLM is not intended for the design, development, or optimization of agents or materials for harmful purposes, including but not limited to weapons of mass destruction, bioterrorism, or other malicious uses. ### Unethical or Harmful Applications The use of NatureLM must align with ethical research practices. It is not intended for tasks that could cause harm to individuals, communities, or the environment. ## Risks and limitations NatureLM may not always generate compounds or proteins precisely aligned with user instructions. Users are advised to apply their own adaptive filters before proceeding. Users are responsible for verification of model outputs and decision-making. NatureLM was designed and tested using the English language. 
Performance in other languages may vary and should be assessed by someone who is both an expert in the expected outputs and a native speaker of that language. NatureLM inherits any biases, errors, or omissions characteristic of its training data, which may be amplified by any AI-generated interpretations. For example, inorganic data in our training corpus is relatively limited, comprising only 0.02 billion tokens out of a total of 143 billion tokens. As a result, the model's performance on inorganic-related tasks is constrained. In contrast, protein-related data dominates the corpus with 65.3 billion tokens, accounting for the largest share of the training data. There has not been a systematic effort to ensure that systems using NatureLM are protected from security vulnerabilities such as indirect prompt injection attacks. Any systems using it should take proactive measures to harden their systems as appropriate. # Training details ## Training data The pre-training data includes text, small molecules (SMILES notations), proteins (FASTA format), materials (chemical composition and space group number), DNA (FASTA format), and RNA (FASTA format). The dataset contains single-domain sequences and cross-domain sequences. ## Training procedure ### Preprocessing The training procedure involves two stages: Stage 1 focuses on training newly introduced tokens while freezing existing model parameters. Stage 2 involves joint optimization of both new and existing parameters to enhance overall performance. ## Training hyperparameters - Learning Rate: 2×10<sup>−4</sup> - Batch Size (Sentences): 8x7B model: 1536 - Context Length (Tokens): 8192 - GPU Number (H100): 8x7B model: 256 ## Speeds, sizes, times Model sizes are listed above. # Evaluation ## Testing data, factors, and metrics ### Testing data The testing data includes 22 types of scientific tasks such as molecular generation, protein generation, material generation, RNA generation, and prediction tasks across small molecules, proteins, and DNA. 
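As a rough consistency check on the hyperparameters and corpus figures above, the per-step token throughput and domain shares work out as follows (assuming every sequence in a batch is packed to the full 8192-token context, which the card does not state explicitly):

```python
# Rough arithmetic on the reported NatureLM 8x7B training setup.
# Assumption: each of the 1536 sequences per batch fills the full 8192-token context.
batch_size_sequences = 1536
context_length_tokens = 8192
tokens_per_step = batch_size_sequences * context_length_tokens
print(f"tokens per optimization step: {tokens_per_step:,}")  # 12,582,912 (~12.6M)

# Corpus composition quoted in the risks and limitations discussion (billions of tokens).
total_b, inorganic_b, protein_b = 143.0, 0.02, 65.3
print(f"inorganic share: {inorganic_b / total_b:.3%}")  # ~0.014%
print(f"protein share:   {protein_b / total_b:.1%}")    # ~45.7%
```

So protein data is the single largest slice at roughly 46% of the corpus, while inorganic materials contribute on the order of one part in seven thousand, consistent with the stated limitation on inorganic-related tasks.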
## Factors 1. Cross-Domain Adaptability: The ability of NatureLM to perform tasks that span multiple scientific domains (e.g., protein-to-compound generation, RNA design for CRISPR targets, or material design with specific properties). 2. Accuracy of Outputs: For tasks like retrosynthesis, assess the correctness of the outputs compared to ground truth or experimentally validated data. 3. Diversity and Novelty of Outputs: Evaluate whether the generated outputs are novel (e.g., new molecules or materials not present in databases or training data). 4. Scalability Across Model Sizes: Assess the performance improvements as the model size increases (1B, 8B, and 46.7B parameters). ## Metrics Accuracy, AUROC, and independently trained AI-based predictors are utilized for various tasks. ## Evaluation results 1. We successfully demonstrated that NatureLM is capable of performing tasks such as target-to-compound, target-to-RNA, and DNA-to-RNA generation. 2. NatureLM achieves state-of-the-art results on retrosynthesis benchmarks and the MatBench benchmark for materials. 3. NatureLM can generate novel proteins, small molecules, and materials. # Summary Nature Language Model (NatureLM) is a groundbreaking sequence-based science foundation model designed to unify multiple scientific domains, including small molecules, materials, proteins, DNA, and RNA. This innovative model leverages the "language of nature" to enable scientific discovery through text-based instructions. NatureLM represents a significant advancement in the field of artificial intelligence, providing researchers with a powerful tool to drive innovation and accelerate scientific breakthroughs. By integrating knowledge across multiple scientific domains, NatureLM paves the way for new discoveries and advancements in various fields of science. We hope to release it to benefit more users and contribute to the development of AI for Science research. 
# Model card contact This work was conducted in Microsoft Research AI for Science. We welcome feedback and collaboration from our audience. If you have suggestions, questions, or observe unexpected/offensive behavior in our technology, please contact us at: - Yingce Xia, [email protected] - Chen Hu, [email protected] - Yawen Yang, [email protected] If the team receives reports of undesired behavior or identifies issues independently, we will update this repository with appropriate mitigations.
uzunb/EBU_sketch_LoRA_musab_data_114_images
uzunb
2025-06-20T04:51:32Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-20T04:45:13Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: a sketch of EBU, widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - uzunb/EBU_sketch_LoRA_musab_data_114_images <Gallery /> ## Model description These are uzunb/EBU_sketch_LoRA_musab_data_114_images LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a sketch of EBU,` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](uzunb/EBU_sketch_LoRA_musab_data_114_images/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use A minimal, untested sketch (the prompt tail after the trigger phrase is illustrative):

```python
# Minimal sketch: load SDXL, apply this LoRA, and generate with the trigger phrase.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# The LoRA weight file in the repo is resolved automatically.
pipeline.load_lora_weights("uzunb/EBU_sketch_LoRA_musab_data_114_images")

image = pipeline("a sketch of EBU, a harbor at dusk").images[0]
image.save("ebu_sketch.png")
```

#### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
mynamerahulkumar/tiny-chatbot-model-dpo
mynamerahulkumar
2025-06-20T04:50:32Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "endpoints_compatible", "region:us" ]
null
2025-06-20T04:47:59Z
--- base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 library_name: transformers model_name: tiny-chatbot-model-dpo tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for tiny-chatbot-model-dpo This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="mynamerahulkumar/tiny-chatbot-model-dpo", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.18.2 - Transformers: 4.52.4 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. 
Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
sachin6624/distilbert-rotten-tomatoes
sachin6624
2025-06-20T04:48:20Z
17
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-16T17:59:03Z
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-rotten-tomatoes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-rotten-tomatoes This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
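For orientation, the optimizer step count these hyperparameters imply can be sketched as follows. The 8,530-example figure assumes the standard `rotten_tomatoes` train split, which the model name suggests but which the card itself ("an unknown dataset") does not confirm:

```python
import math

# Hypothetical step-count estimate for the run above.
# Assumes the standard rotten_tomatoes train split of 8,530 examples.
train_examples = 8530
train_batch_size = 8
num_epochs = 2

steps_per_epoch = math.ceil(train_examples / train_batch_size)  # 1067
total_steps = steps_per_epoch * num_epochs                      # 2134
print(steps_per_epoch, total_steps)
```

Under the Trainer's defaults (linear scheduler, no warmup), the learning rate would decay from 2e-05 to zero over roughly those 2,134 steps.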
Isotopish/model
Isotopish
2025-06-20T04:46:26Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-20T04:41:43Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Isotopish - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
TOMFORD79/modelS11
TOMFORD79
2025-06-20T04:43:33Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T04:33:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
qwbu/univla-7b-224-sft-calvin
qwbu
2025-06-20T04:43:14Z
0
1
transformers
[ "transformers", "safetensors", "openvla", "robotics", "arxiv:2505.06111", "license:apache-2.0", "endpoints_compatible", "region:us" ]
robotics
2025-06-19T10:56:57Z
--- license: apache-2.0 pipeline_tag: robotics library_name: transformers --- # UniVLA: Learning to Act Anywhere with Task-centric Latent Actions The model was presented in the paper [UniVLA: Learning to Act Anywhere with Task-centric Latent Actions](https://huggingface.co/papers/2505.06111). ## UniVLA-7b for CALVIN test suites Code can be found at [https://github.com/OpenDriveLab/UniVLA](https://github.com/OpenDriveLab/UniVLA). **🚀 Run the following script to start an evaluation on CALVIN ABC-D:** ```bash # Multi-GPU evaluation is supported torchrun --standalone --nnodes 1 --nproc-per-node 8 experiments/robot/calvin/run_calvin_eval_ddp.py \ --calvin_root /path/to/your/calvin_root_path \ --action_decoder_path /path/to/your/action_decoder.pt \ --pretrained_checkpoint /path/to/your/calvin_finetuned_univla \ --seed 7 ``` ## 📝 Citation If you find our models useful in your work, please cite [our paper](https://arxiv.org/pdf/2505.06111): ```bibtex @article{bu2025univla, title={UniVLA: Learning to Act Anywhere with Task-centric Latent Actions}, author={Qingwen Bu and Yanting Yang and Jisong Cai and Shenyuan Gao and Guanghui Ren and Maoqing Yao and Ping Luo and Hongyang Li}, journal={arXiv preprint arXiv:2505.06111}, year={2025} } ```
Sharing22/aac_c8
Sharing22
2025-06-20T04:42:09Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T04:39:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kingardor/llama3.1-8B-instruct-29reports-lora128-extreme
kingardor
2025-06-20T04:34:50Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T04:32:12Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kingardor/qwen3-0.6B-instruct-29reports-lora256-extreme
kingardor
2025-06-20T04:34:45Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T04:33:59Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
morturr/Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-1-seed-28-2025-06-20
morturr
2025-06-20T04:34:25Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-20T04:34:10Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-1-seed-28-2025-06-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-1-seed-28-2025-06-20 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 28 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
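The `total_train_batch_size` of 64 reported above follows directly from the per-device batch size and gradient accumulation. A minimal sketch of that arithmetic (the config dict is illustrative, mirroring the card's values; single-device training is assumed):

```python
# Hyperparameters as reported in the card above (illustrative dict, not the
# actual trainer config object).
hparams = {
    "learning_rate": 3e-4,
    "train_batch_size": 16,          # per-device batch size
    "gradient_accumulation_steps": 4,
    "seed": 28,
    "num_epochs": 2,
}

# With gradient accumulation, each optimizer step effectively sees
# per_device_batch * accumulation_steps examples (assuming one device).
effective_batch = (
    hparams["train_batch_size"] * hparams["gradient_accumulation_steps"]
)
print(effective_batch)  # 64, matching the reported total_train_batch_size
```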
zhuyaoyu/CodeV-R1-Qwen-7B
zhuyaoyu
2025-06-20T04:33:12Z
131
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "verilog", "conversational", "arxiv:2505.24183", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T07:54:51Z
--- base_model: - Qwen/Qwen2.5-Coder-7B-Instruct library_name: transformers tags: - verilog pipeline_tag: text-generation --- ## CodeV-R1-Qwen-7B [Project page](https://iprc-dip.github.io/CodeV-R1) <div class="figure-container" style="display: flex; flex-direction: column; gap: 15px; max-width: 850px;"> <div style="display: flex; gap: 10px; justify-content: center; margin-bottom: -3rem;"> <img src="./assets/rtllm_tts.png" alt="RTLLM TTS Results" width="400"> <img src="./assets/rtllm_tts_flops.png" alt="RTLLM TTS FLOPs Results" width="400"> </div> <figcaption class="caption has-text-centered has-text-grey" style="font-size: 0.8rem;"> Test-time scaling curves. <strong>Left</strong>: Inference time as a function of token length. <strong>Right</strong>: Inference time vs. estimated FLOPs consumption. When measured by FLOPs consumption, our <strong>CodeV-R1-Qwen-7B</strong> achieves better results with fewer computational resources than DeepSeek-R1, highlighting its superior efficiency. </figcaption> </div> ### 1. Introduction Large language models (LLMs) trained via reinforcement learning with verifiable reward (RLVR) have achieved breakthroughs on tasks with explicit, automatable verification, such as software programming and mathematical problems. Extending RLVR to electronic design automation (EDA), however, especially the automatic generation of hardware description languages (HDLs) like Verilog from natural-language (NL) specifications, poses three key challenges: the lack of automated and accurate verification environments, the scarcity of high-quality NL–code pairs, and the prohibitive computation cost of RLVR. To this end, we introduce **CodeV-R1**, an RLVR framework for training Verilog generation LLMs, continuing the work initiated with [CodeV](https://huggingface.co/collections/yang-z/codev-6698a560cd94e61a9675fa2a). First, we develop a rule-based testbench generator that performs robust equivalence checking against golden references.
Second, we propose a round-trip data synthesis method that pairs open-source Verilog snippets with LLM-generated NL descriptions, verifies code–NL–code consistency via the generated testbench, and filters out inequivalent examples to yield a high-quality dataset. Third, we employ a two-stage distill-then-RL training pipeline: distillation for the cold start of reasoning abilities, followed by adaptive DAPO, our novel RLVR algorithm that reduces training cost by adaptively adjusting the sampling rate. **CodeV-R1-Qwen-7B** is an RL fine-tuned model built on top of **CodeV-R1-Distill-Qwen-7B**. The distillation-based precursor, **CodeV-R1-Distill-Qwen-7B**, is provided [here](https://huggingface.co/zhuyaoyu/CodeV-R1-Distill-Qwen-7B). For more training details, please refer to our [paper](https://arxiv.org/abs/2505.24183). ### 2. Evaluation Results For evaluation, the maximum generation length is set to 16,384 tokens and the temperature to 0.6, and 20 responses are generated per query to estimate the pass@1 score. Our evaluation covers the Verilog benchmarks VerilogEval and RTLLM. For VerilogEval v2, we examine zero-shot scenarios in both specification-to-RTL translation and code completion tasks. For RTLLM, results are reported on version 1.1, which offers a broader set of comparison baselines. Furthermore, we find that distilling DeepSeek-R1's reasoning process on Verilog problems also enhances the model's out-of-domain mathematical capabilities.
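The pass@1 figures below are estimated from the 20 sampled responses per query. A minimal sketch of the standard unbiased pass@k estimator (function name and usage are ours, for illustration; for k=1 it reduces to the fraction of correct samples):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: probability that at least one of k draws
    (without replacement) from n samples, of which c are correct, passes."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n=20 responses per query (as in this card), pass@1 is simply c / n:
print(pass_at_k(20, 10, 1))  # 0.5
```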
#### VerilogEval (v2) | Model | Model size | Type | Spec-to-rtl | Completion | | --------------------------- | ----------- | ----------- | ----------- | ---------- | | GPT-4o | Undisclosed | General | 62.5% | 59.0% | | GPT-4 Turbo | Undisclosed | General | 61.1% | 53.9% | | GPT-4 | Undisclosed | General | 32.0% | 42.3% | | Mistral Large | Undisclosed | General | 37.5% | 34.0% | | Llama3.1 | 405B | General | 57.2% | 56.4% | | Llama3.1 | 70B | General | 42.8% | 35.3% | | Llama3 | 70B | General | 43.9% | 37.8% | | Llama2 | 70B | General | 5.3% | 1.3% | | Llama3.1 | 8B | General | 19.1% | 2.6% | | CodeLlama | 70B | Coding | 34.9% | 37.2% | | DeepSeek Coder | 33B | Coding | 21.7% | 25.0% | | CodeGemma | 7B | Coding | 9.5% | 8.3% | | DeepSeek Coder | 6.7B | Coding | 29.6% | 24.4% | | RTL-Coder | 6.7B | Verilog RTL | 36.8% | 35.9% | | **CodeV-R1-distill (ours)** | 7B | Verilog RTL | 65.2% | 65.5% | | **CodeV-R1 (ours)** | 7B | Verilog RTL | **68.8%** | **69.9%** | #### RTLLM (v1.1) | Model | Model size | Type | Pass@1 | | --------------------------- | ----------- | ----------- | --------- | | GPT-4o | Undisclosed | General | 33.8% | | GPT-3.5 Turbo | Undisclosed | General | 28.3% | | Llama3.1 | 405B | General | 38.9% | | Nemotron-4 | 340B | General | 18.9% | | Llama3.1 | 8B | General | 19.1% | | CodeLlama | 7B | Coding | 17.9% | | CodeQwen | 7B | Coding | 24.1% | | Starcoder2 | 15B | Coding | 15.5% | | DeepSeek Coder | 6.7B | Coding | 23.1% | | DeepSeek-Coder-V2 | 16B | Coding | 33.1% | | DeepSeek-Coder-V2 | 236B | Coding | 34.5% | | RTL-Coder | 6.7B | Verilog RTL | 36.8% | | CraftRTL | 6.7B | Verilog RTL | 53.1% | | **CodeV-R1-distill (ours)** | 7B | Verilog RTL | 56.2% | | **CodeV-R1 (ours)** | 7B | Verilog RTL | **72.9%** | For RTLLM v1.1, we also plot pass rate against model size. <div style="display: flex; gap: 10px;"> <img src="./assets/rtllm_acc_vs_model_size.png" alt="RTLLM pass rate vs. model size" width="1200"> </div> ### 4.
Usage CodeV-R1-Qwen-7B can be used in the same manner as Qwen or Llama models. For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm): ```bash vllm serve zhuyaoyu/CodeV-R1-Qwen-7B --tensor-parallel-size 2 --max-model-len 16384 --enforce-eager ``` **Usage Recommendations** During training and evaluation, we use the following system prompt: ``` You are a helpful assistant. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning process and answer are enclosed within <think> </think> and<answer> </answer> tags, respectively, i.e., <think> reasoning process here </think><answer> answer here </answer>. Now the user asks you to write verilog code. After thinking, when you finally reach a conclusion, enclose the final verilog code in ```verilog ``` within <answer> </answer> tags. i.e., <answer> ```verilog module top_module(in, out, ...) ... ``` </answer>. ``` It is recommended to use this prompt during inference as well. ### 5. License CodeV-R1-Qwen-7B is derived from the [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and was fine-tuned with 87k samples curated with DeepSeek-R1. ### 6. Citation If you find our model helpful, please cite our [paper](https://arxiv.org/abs/2505.24183): ```tex @misc{zhu2025codevr1, title={CodeV-R1: Reasoning-Enhanced Verilog Generation}, author={Yaoyu Zhu and Di Huang and Hanqi Lyu and Xiaoyun Zhang and Chongxiao Li and Wenxuan Shi and Yutong Wu and Jianan Mu and Jinghua Wang and Yang Zhao and Pengwei Jin and Shuyao Cheng and Shengwen Liang and Xishan Zhang and Rui Zhang and Zidong Du and Qi Guo and Xing Hu and Yunji Chen}, year={2025}, eprint={2505.24183}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2505.24183}, } ```
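Given the `<think>…</think><answer>```verilog …```</answer>` format the system prompt enforces, the final Verilog can be recovered from a completion with a small regex. A hedged sketch (the helper name and the sample completion string are ours, for illustration only):

```python
import re

def extract_verilog(completion: str):
    """Pull the final Verilog block out of a completion that follows the
    <think>...</think><answer>```verilog ... ```</answer> format."""
    m = re.search(
        r"<answer>.*?```verilog\s*(.*?)```.*?</answer>",
        completion,
        flags=re.DOTALL,
    )
    return m.group(1).strip() if m else None

# Fabricated completion, for illustration only:
sample = (
    "<think> reasoning here </think>"
    "<answer> ```verilog\nmodule top_module(input in, output out);\n"
    "  assign out = in;\nendmodule\n``` </answer>"
)
print(extract_verilog(sample).splitlines()[0])
# module top_module(input in, output out);
```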
audrjs51/my-bert-fine-tuned
audrjs51
2025-06-20T04:30:16Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-20T04:06:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HKReporter/ECTEL-2025-llama3-fold5-CU5
HKReporter
2025-06-20T04:27:01Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit", "region:us" ]
null
2025-06-20T04:10:30Z
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
Alcoft/Qwen3-14B-GGUF
Alcoft
2025-06-20T04:25:25Z
0
0
null
[ "gguf", "qwen3", "text-generation", "base_model:Qwen/Qwen3-14B", "base_model:quantized:Qwen/Qwen3-14B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-06-19T23:14:26Z
--- license: apache-2.0 base_model: - Qwen/Qwen3-14B pipeline_tag: text-generation tags: - qwen3 ---
Ackesnal/RePaViT
Ackesnal
2025-06-20T04:25:07Z
0
3
null
[ "arxiv:2505.21847", "license:cc-by-nc-sa-4.0", "region:us" ]
null
2025-06-19T01:51:20Z
--- license: cc-by-nc-sa-4.0 --- # RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers [ICML2025] [arXiv](https://arxiv.org/abs/2505.21847) This is the official model weights repository for __RePaViT__. For detailed instructions, please refer to [https://github.com/Ackesnal/RePaViT](https://github.com/Ackesnal/RePaViT). # 0. Environment Setup First, clone the repository locally: ``` git clone https://github.com/Ackesnal/RePaViT.git cd RePaViT ``` Then, install the environment via conda: ``` conda create -n repavit python=3.10 -y && conda activate repavit conda install conda-forge::python-rocksdb -y pip install torch torchvision torchaudio timm==1.0.3 einops ptflops wandb ``` __[Recommended]__ Alternatively, you can directly install from the pre-defined environment YAML file: ``` conda env create -f environment.yml ``` After finishing the above installation, the repo is ready to run. We also use [wandb](https://wandb.ai/site) for real-time tracking and training-process visualization. The use of wandb is optional, but you will need to register and log in to wandb before using this functionality. # 1. Dataset Preparation Download and extract the ImageNet train and val images from http://image-net.org/. The directory structure is the standard layout for torchvision's `datasets.ImageFolder`: the training and validation data are expected to be in the `train/` and `val/` folders, respectively: ``` /path/to/imagenet/ train/ class1/ img1.jpeg class2/ img2.jpeg val/ class1/ img3.jpeg class2/ img4.jpeg ``` We provide support for [RocksDB](https://python-rocksdb.readthedocs.io/en/latest/) as an alternative dataset organization solution. In certain HPC environments where the number of allowable files is limited, the ImageNet dataset cannot be fully decompressed on high-speed I/O disks. 
In this case, RocksDB enables efficient and stable storing and loading of ImageNet data without the need for millions of small image files. To insert ImageNet into a RocksDB database, simply run ``` python insert_rocksdb.py ``` (please replace `tar_path_root` and `db_path_root` in [insert_rocksdb.py](https://github.com/Ackesnal/RePaViT/insert_rocksdb.py) with your own source and target root paths). When training the model, use the `--rocksdb` argument instead of `--data_path` to specify the database location. # 2. Evaluation ## 2.1. Accuracy evaluation To evaluate prediction performance, please run the following code. Please ensure `--idle_ratio` is set to the same value used for the pretrained model weights. __[RePaViT-Large] performance test:__ ``` torchrun --nproc_per_node=4 main.py \ --model=RePaViT_Large \ --batch_size=512 \ --eval \ --dist_eval \ --channel_idle \ --idle_ratio=0.75 \ --feature_norm=BatchNorm \ --data_path=/path/to/imagenet \ --resume=/path/to/pretrained_weight.pth ``` For your convenience, we also provide a one-line command below: ``` torchrun --nproc_per_node=4 main.py --model=RePaViT_Large --batch_size=512 --eval --dist_eval --channel_idle --idle_ratio=0.75 --feature_norm=BatchNorm --data_path=/path/to/imagenet --resume=/path/to/pretrained_weight.pth ``` ## 2.2. Inference speed test To test inference speed, use the `--test_speed` and `--only_test_speed` arguments; we recommend setting the number of processes to 1: __[RePaViT-Large] speed test:__ ``` torchrun --nproc_per_node=1 main.py \ --model=RePaViT_Large \ --channel_idle \ --idle_ratio=0.75 \ --feature_norm=BatchNorm \ --test_speed ``` For your convenience, we also provide a one-line command below: ``` torchrun --nproc_per_node=1 main.py --model=RePaViT_Large --channel_idle --idle_ratio=0.75 --feature_norm=BatchNorm --test_speed ``` ## 2.3. 
Evaluation with Structural Reparameterization To enable inference-stage model compression via structural reparameterization, you can simply add the argument __`--reparam`__ as: __[RePaViT-Large] speed test after structural reparameterization:__ ``` torchrun --nproc_per_node=1 main.py \ --model=RePaViT_Large \ --channel_idle \ --idle_ratio=0.75 \ --feature_norm=BatchNorm \ --test_speed \ --reparam ``` For your convenience, we also provide a one-line command below: ``` torchrun --nproc_per_node=1 main.py --model=RePaViT_Large --channel_idle --idle_ratio=0.75 --feature_norm=BatchNorm --test_speed --reparam ``` `--reparam` can be combined with performance evaluation as well. The prediction accuracy before and after reparameterization should be the same. # 3. Supported Models In this repo, we currently support the following backbone models: * RePaViT-Tiny _(i.e., RePaDeiT-Tiny)_ * RePaViT-Small _(i.e., RePaDeiT-Small)_ * RePaViT-Base _(i.e., RePaDeiT-Base)_ * RePaViT-Large * RePaViT-Huge * RePaSwin-Tiny * RePaSwin-Small * RePaSwin-Base # 4. Reference If you use this repo or find it useful, please consider citing: ``` @inproceedings{xu2025repavit, title = {RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers}, author = {Xu, Xuwei and Li, Yang and Chen, Yudong and Liu, Jiajun and Wang, Sen}, booktitle = {The 42nd International Conference on Machine Learning (ICML)}, year = {2025} } ```
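The accuracy guarantee above holds because folding a normalization layer into the preceding linear layer is an exact algebraic identity. As a generic illustration (this is a hedged sketch, not the repository's actual reparameterization code, which operates on whole FFN blocks), the following folds a per-channel BatchNorm that follows a linear layer into a single equivalent linear layer:

```python
import numpy as np

def fold_bn_into_linear(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold y = gamma * ((W @ x + b) - mean) / sqrt(var + eps) + beta
    into a single linear layer y = W_f @ x + b_f (exact identity)."""
    scale = gamma / np.sqrt(var + eps)   # per-output-channel scale
    W_f = W * scale[:, None]             # rescale each output row of W
    b_f = (b - mean) * scale + beta      # absorb BN shift into the bias
    return W_f, b_f
```

Since the fold is exact up to floating-point rounding, the fused network computes the same function with fewer operations, which is where the inference speed-up comes from.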
SYoungT/3B-32-pt2
SYoungT
2025-06-20T04:23:46Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-20T04:21:38Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** SYoungT - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf
RichardErkhov
2025-06-20T04:13:25Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-20T03:05:00Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) MT1-Gen1-gemma-2-9B - GGUF - Model creator: https://huggingface.co/zelk12/ - Original model: https://huggingface.co/zelk12/MT1-Gen1-gemma-2-9B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [MT1-Gen1-gemma-2-9B.Q2_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q2_K.gguf) | Q2_K | 3.54GB | | [MT1-Gen1-gemma-2-9B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.IQ3_XS.gguf) | IQ3_XS | 3.86GB | | [MT1-Gen1-gemma-2-9B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.IQ3_S.gguf) | IQ3_S | 4.04GB | | [MT1-Gen1-gemma-2-9B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q3_K_S.gguf) | Q3_K_S | 4.04GB | | [MT1-Gen1-gemma-2-9B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.IQ3_M.gguf) | IQ3_M | 4.19GB | | [MT1-Gen1-gemma-2-9B.Q3_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q3_K.gguf) | Q3_K | 4.43GB | | [MT1-Gen1-gemma-2-9B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q3_K_M.gguf) | Q3_K_M | 4.43GB | | [MT1-Gen1-gemma-2-9B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q3_K_L.gguf) | Q3_K_L | 4.78GB | | [MT1-Gen1-gemma-2-9B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.IQ4_XS.gguf) | IQ4_XS | 4.86GB | | 
[MT1-Gen1-gemma-2-9B.Q4_0.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q4_0.gguf) | Q4_0 | 5.07GB | | [MT1-Gen1-gemma-2-9B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.IQ4_NL.gguf) | IQ4_NL | 5.1GB | | [MT1-Gen1-gemma-2-9B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q4_K_S.gguf) | Q4_K_S | 5.1GB | | [MT1-Gen1-gemma-2-9B.Q4_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q4_K.gguf) | Q4_K | 5.37GB | | [MT1-Gen1-gemma-2-9B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q4_K_M.gguf) | Q4_K_M | 5.37GB | | [MT1-Gen1-gemma-2-9B.Q4_1.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q4_1.gguf) | Q4_1 | 5.55GB | | [MT1-Gen1-gemma-2-9B.Q5_0.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q5_0.gguf) | Q5_0 | 6.04GB | | [MT1-Gen1-gemma-2-9B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q5_K_S.gguf) | Q5_K_S | 6.04GB | | [MT1-Gen1-gemma-2-9B.Q5_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q5_K.gguf) | Q5_K | 6.19GB | | [MT1-Gen1-gemma-2-9B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q5_K_M.gguf) | Q5_K_M | 6.19GB | | [MT1-Gen1-gemma-2-9B.Q5_1.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q5_1.gguf) | Q5_1 | 6.52GB | | [MT1-Gen1-gemma-2-9B.Q6_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q6_K.gguf) | Q6_K | 7.07GB | | 
[MT1-Gen1-gemma-2-9B.Q8_0.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT1-Gen1-gemma-2-9B-gguf/blob/main/MT1-Gen1-gemma-2-9B.Q8_0.gguf) | Q8_0 | 9.15GB | Original model description: --- library_name: transformers tags: - mergekit - merge base_model: - zelk12/MT1-Gen1-IMA-gemma-2-9B - zelk12/MT1-Gen1-BGMMMU-gemma-2-9B model-index: - name: MT1-Gen1-gemma-2-9B results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 79.74 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-Gen1-gemma-2-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 44.27 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-Gen1-gemma-2-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 12.24 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-Gen1-gemma-2-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 12.53 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-Gen1-gemma-2-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 13.1 name: acc_norm source: url: 
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-Gen1-gemma-2-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 37.51 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-Gen1-gemma-2-9B name: Open LLM Leaderboard --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [zelk12/MT1-Gen1-IMA-gemma-2-9B](https://huggingface.co/zelk12/MT1-Gen1-IMA-gemma-2-9B) * [zelk12/MT1-Gen1-BGMMMU-gemma-2-9B](https://huggingface.co/zelk12/MT1-Gen1-BGMMMU-gemma-2-9B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: zelk12/MT1-Gen1-IMA-gemma-2-9B - model: zelk12/MT1-Gen1-BGMMMU-gemma-2-9B merge_method: slerp base_model: zelk12/MT1-Gen1-IMA-gemma-2-9B dtype: bfloat16 parameters: t: 0.666666667 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_zelk12__MT1-Gen1-gemma-2-9B) | Metric |Value| |-------------------|----:| |Avg. |33.23| |IFEval (0-Shot) |79.74| |BBH (3-Shot) |44.27| |MATH Lvl 5 (4-Shot)|12.24| |GPQA (0-shot) |12.53| |MuSR (0-shot) |13.10| |MMLU-PRO (5-shot) |37.51|
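The SLERP merge in the configuration above interpolates each pair of weight tensors along the great-circle arc between their directions with `t ≈ 0.667`, weighting the second model roughly twice as heavily as the first. A minimal sketch of spherical linear interpolation is below (mergekit's actual implementation differs in details, e.g. per-tensor handling; the near-parallel fallback to plain lerp is a common convention assumed here):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.
    t=0 returns v0, t=1 returns v1; intermediate t follows the
    great-circle arc between the normalized directions."""
    v0f = v0.ravel().astype(np.float64)
    v1f = v1.ravel().astype(np.float64)
    n0, n1 = np.linalg.norm(v0f), np.linalg.norm(v1f)
    dot = np.clip(np.dot(v0f / n0, v1f / n1), -1.0, 1.0)
    theta = np.arccos(dot)
    if np.sin(theta) < eps:  # nearly parallel: fall back to plain lerp
        return ((1 - t) * v0f + t * v1f).reshape(v0.shape)
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return (s0 * v0f + s1 * v1f).reshape(v0.shape)
```

At `t=0` the result is the first tensor and at `t=1` the second; near-parallel inputs use linear interpolation to avoid dividing by `sin(θ) ≈ 0`.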
Suraponn/Sealion_Audio_eng_only
Suraponn
2025-06-20T04:12:52Z
0
0
transformers
[ "transformers", "pytorch", "sealionaudio", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2025-06-19T16:34:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DavidAU/Llama-3.2-9B-Uncensored-Brainstorm-Alpha
DavidAU
2025-06-20T04:11:05Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-3", "llama3", "bfloat16", "uncensored", "all use cases", "creative use cases", "creative", "creative writing", "fiction writing", "plot generation", "sub-plot generation", "story generation", "scene continue", "storytelling", "fiction story", "science fiction", "romance", "all genres", "story", "writing", "vivid prose", "vivid writing", "fiction", "roleplaying", "swearing", "rp", "GGUFs", "merge", "conversational", "en", "arxiv:2401.02415", "base_model:Hastagaras/L3.2-JametMini-3B-MK.III", "base_model:merge:Hastagaras/L3.2-JametMini-3B-MK.III", "base_model:chuanli11/Llama-3.2-3B-Instruct-uncensored", "base_model:merge:chuanli11/Llama-3.2-3B-Instruct-uncensored", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T02:33:48Z
--- language: - en tags: - llama-3 - llama3 - llama - bfloat16 - uncensored - all use cases - creative use cases - creative - creative writing - fiction writing - plot generation - sub-plot generation - story generation - scene continue - storytelling - fiction story - science fiction - romance - all genres - story - writing - vivid prose - vivid writing - fiction - roleplaying - swearing - rp - GGUFs - merge license: apache-2.0 base_model: - Hastagaras/L3.2-JametMini-3B-MK.III - chuanli11/Llama-3.2-3B-Instruct-uncensored pipeline_tag: text-generation library_name: transformers --- <B><font color="red">WARNING:</font> NSFW. Vivid prose. INTENSE. Visceral Details. Violence. HORROR. GORE. Swearing. Humor, romance, fun. </B> <h2>Llama-3.2-9B-Uncensored-Brainstorm-Alpha</h2> <img src="9b-llama.jpg" style="float:right; width:300px; height:300px; padding:10px;"> This repo contains the full precision source code, in "safe tensors" format, to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats. The source code can also be used directly. ABOUT: The first of the new Llama 3.2 models at 9B parameters, built with "Hastagaras/L3.2-JametMini-3B-MK.III", "chuanli11/Llama-3.2-3B-Instruct-uncensored" and the Brainstorm 40x adapter (notes at the bottom of the page) by DavidAU. This is an 81-layer [733 tensors], Brainstorm 40x enhanced (changes token generation, enhances performance), uncensored monster. 128k context; the CHATML template is required (Alpaca or Llama3 Instruct also work). This model has extraordinary prose and creative abilities. Three example generations are below. This model "comes" alive at temps over 1. This model is primarily for "Creative use cases"; however, it can be used for any use case. As this is an instruct model, the more instructions (including prose directives) you include in your prompt and/or system prompt, the better this model will perform. 
--- <B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B> In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern", set the "Smoothing_factor" to 1.5 : in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F" : in text-generation-webui -> parameters -> lower right. : In Silly Tavern this is called: "Smoothing" NOTE: For "text-generation-webui" -> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model). Source versions (and config files) of my models are here: https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be OTHER OPTIONS: - Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor") - If the interface/program you are using to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted. <B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B> This is a "Class 1" model. For all settings used for this model (including specifics for its "class"), example generations, and an advanced settings guide (which often addresses model issues), including methods to improve model performance for all use cases as well as chat, roleplay and other use cases, please see: [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ] <B>Optional Enhancement:</B> The following can be used in place of the "system prompt" or "system role" to further enhance the model. It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along. 
In this case, the enhancements do not have as strong an effect as when using the "system prompt" or "system role". Copy and paste EXACTLY as noted; DO NOT line wrap or break the lines, and maintain the carriage returns exactly as presented. <PRE> Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities. Here are your skillsets: [MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv) [*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision) Here are your critical instructions: Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story. </PRE> You do not need to use this; it is only presented as an additional enhancement which seems to help scene generation and scene-continue functions. This enhancement WAS NOT used to generate the examples below. --- EXAMPLE #1: Q6_K, temp 1.8, topk 100, topp .95, minp .05, rep pen 1.05, rep pen range: 64 No other samplers, no system prompt. 
--- <B> In the style of Ernest Hemingway and Stephen King: Write a scene in a larger story about a woman that is watching a window explode on the 21 floor of a building that starts to sway, then the scene escalates into greater danger by the minute, takes a "horror" twist and finally ends in visceral graphic horror. The scene is set in an office and it is graphic horror. It should carry the jadedness of a seasoned combatant but with a wry humor. Write in Hemingway's voice, naturally: Use short, declarative sentences and simple, straightforward language. Add a dash of Stephen King's horror style - no warning, it just happens. Include dialogue that is terse and revealing of character. Write with a keen sense of time and place. Capture realness and authenticity. Scene Length: 1000-1500 words. POV STYLE: Write the scene from the perspective of the woman and in present tense and first person perspective ("I") and include short thoughts only. </B> I was sitting at my desk, typing away on some financial reports in a stuffy office building, staring out the window at the busy streets below. It was one of those days where everything felt like it was just another monotonous routine. The sounds of keyboards clacking and people talking filled the room. Then suddenly I heard a bang. The windows in the entire floor had blown out, glass cascading down the hallway like water from a burst dam. I turned my attention back to the window, my eyes scanning the broken glass scattered all over the floor. But what caught my attention was something else. A figure was hanging from the shattered glass, his hand and face badly injured. I got up slowly and looked around, looking for someone who had pulled him in. It was Mr. Johnson, the maintenance guy, stumbling down the hallway with a first-aid kit in his hands. "Mr. Johnson! What the hell?" I called out to him, my voice a mix of shock and concern. "Jesus, I think he broke a window trying to catch something!" 
He responded back, looking around frantically as he continued down the hallway. "Was it another person or something else?" I followed him into the lobby where the glass had landed. There was a body lying on the floor, some blood and cuts all over its body. The body was of another office worker who must have been in one of the other offices. He looked dead, or worse. "It looks like it," Johnson said as he examined the body more closely. "It's definitely not alive." "Should we call someone?" I asked him, looking around to see if anyone else was there to help. "Good idea," he responded before making the phone call. As we waited for paramedics and someone to pick up the bodies, I continued to survey the scene. The lobby was covered in broken glass, there were people running out of the building screaming and screaming, some were crying while others were trying to make phone calls. I was looking around when I saw a figure that made my blood run cold. He was one of my coworkers, but he was not how I knew him. He had a white suit on, and his hair was cut short like a haircut he would get before a business meeting. His eyes were empty, cold and cold as the dead, and they were looking right at me. And in that moment, I realized I was staring into the face of someone who had been gone for months. I didn't know what was going on but I knew I was in danger. Johnson finally hung up the phone, looking like he hadn 0ne too. "Call security," he said as he took a deep breath. "We're going to have more of those." "Security?" I asked him, looking around to see if anyone else was there. The hallways were empty and the people who were leaving were just being swept out the doors like they were nothing. Johnson nodded. He looked like he had seen this kind of thing before. "I'll get them," he said, then he was gone. I was alone in a lobby covered in glass with more bodies on the floor and a madman on the loose who was going to kill us next. 
As I stood there, staring into the face of another person I had known, I saw him walking towards me. His eyes were empty, his expression blank, his hands bloody. He was getting closer and I knew he was coming for me. I didn't know how to get out of here or what would happen if he caught me. He stood in front of me. I could see my own reflection in his empty eyes, seeing myself looking into the same empty hole as mine had been only months ago. And then suddenly there were two eyes instead of one, staring back at me. A face was superimposed on him. The face of a person who had been dead and disappeared months ago. I recognized him. I had known him for years. I knew him because we went to school together. I knew him because I was his girlfriend. And he was dead. The realization made me feel like I was drowning in my own blood. The world around me went black as a cold, hand reached out and dragged me into the darkness. Then I was lying on the floor next to him, bleeding all over him, watching as the world around us became completely and utterly silent. I saw a figure standing over me, looking down with cold, empty eyes. I saw my own reflection staring back at me from those same cold, dead eyes. And then I was gone. I woke up in an ambulance on the way to the hospital. The ride was blur but it was long and the pain was unbearable. The paramedics were talking about someone named "S" and that he had been a guy who had cut his own throat. He was wearing a white shirt with a red tie. I was shaking with fear. As we drove through the streets of Boston, I could see people looking out their windows watching us drive by. Some looked terrified while others were screaming in my direction. Some people were taking pictures. The images I saw were ones of blood and death. When we arrived at the hospital, I was greeted by a police officer and several hospital doctors. 
I had no memory of who did it or why they killed him but I knew it was all connected to that face I had seen so many years ago. They took me into the radiology room for another CT scan and some bloodwork. They told me that I was still in shock, and the doctors said that there was something wrong with my brain. The next thing I knew, I was woken up in the ICU after an emergency operation to repair some major damage to my brain. The doctor came into the room, his face etched with concern and fear. "I'm sorry," he said softly. "You were in a coma for two years." "Who was it?" I asked him, my voice shaking. He looked at me then looked away. "I don't know who it was." --- EXAMPLE #2: Q6_K, temp 1.5, topk 100, topp .95, minp .05, rep pen 1.05, rep pen range: 64 No other samplers, no system prompt. TWO Generations. --- <B> Using the following "story idea" below, write the first scene a horror novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 1000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman. The pov character will CURSE AND SWEAR, and generally have a "filthy" mouth. Story idea: In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution. </B> [ GENERATION #1 - Temp 1.5 ] Chapter 1 "Shit, I thought we had more time." I scream, frantically scanning our surroundings as bullets whizzed past us through the broken windows of the abandoned gas station. "Get down!" I push myself away from the wall, my hand instinctively reaching for the Glock sheathed at my waist. 
My other hand instinctly covers the wound on my shoulder, feeling the blood trickling down my arm. "Kallie!" a voice yells above me. It's Ly, another rebel and my closest friend. "Cover us! I'm trying to take out their helicopter!" " LY!!" I scream, but before I can finish, a burst of gunfire crashes into the windows again, this time hitting Kali in the chest. She falls, her body slamming into mine as we both crash to the ground. I don't have time to react. I'm already on my knees, pushing myself up, and pulling Kali's limp body onto hers. I feel her heartbeat slowing down and I start frantically checking for breathing. No. No, no, no. I keep shaking her. "Kallie! wake up!" I've lost track of time. The last thing I remember is the sounds of gunfire, screams, and Kali's body crashing into mine. Now, the only thing I hear is the beat of my own heart, pounding in my ears. I finally manage to haul Kali's body onto mine, trying not to look at her face as the blood pools under me. She was so young, so bright, so full of life. We were supposed to take down the regime today. Today was going to be our day. But instead, we're here, in this dingy, dirty gas station, with bullets still raining down and I've lost my friend. And I'm alone now. I see a shadow looming over us. The government forces are closing in, and they won't give up until they're done with both of us. "Kallie, wake up!" I scream again, but she remains still. The last thing I hear is the sound of boots kicking through the door, followed by a voice I know all too well: "Let her go, Kiyoko." I feel a hand grasp my hair, yank me back against Kali's body and I see a familiar face staring down at me. His eyes are cold, his expression a mixture of anger and pity. "Kiy," he says, his voice a mix of horror and fury, "let her go." His gaze flicker to Kali's body and then back to mine again. I grit my teeth, my fists clenched as I force myself to stay still. 
I can feel the adrenaline coursing through my veins, fueling my anger and hatred towards him. "You killed my friend," I spit out, trying to keep my voice shaking but failing as I shout "You killed Kallie!" His eyes never leave mine, his expression twisted in pain and guilt. He seems to be searching for a reason, for an excuse, but I won't give him one. "Let her go, Kiy," he says again, his tone cold and hard. I open my mouth to respond, but before I can speak, I hear the sound of more boots, more voices, and another familiar face falls into the picture. "Your friends are here," another voice says. "And they're not going to take you alive." The last thing I see is a mix of shock and fear on Kiy's face as he looks towards the door. The rest is a blur of pain and gunfire as I realize I'm trapped, alone with him in a dirty gas station, surrounded by bullets and guns. But I won't be defeated that easily. End of Chapter 1 [to be continued] I hope this captures the essence of what you're looking for. The scene is meant to start mid- action, include vivid descriptions, and end on a cliffhanger, but also set the tone for the rest of the novel. The use of profanity and cursing is meant to give the main character a more relatable and raw feel. Please let me know if there's anything I can change or adjust. Thank you! [GENERATION #2 - temp .79 ] I had no idea when they would come for me. I was already on edge, my heart racing with every sound of footsteps outside my apartment door. I had been watching the door and windows for hours, waiting for them to finally find me. I had expected it, but that didn't make it any easier. My hands were shaking as I checked my phone again. I had sent out a distress call to some old contacts, hoping someone would answer. But so far, silence was their only response. The knock at the door made my heart stop. It was them. They had finally found me. I took a deep breath and grabbed my backpack. 
I had stashed a few essentials in it – a water bottle, a spare set of clothes, and a small knife that I had obtained from a fellow rebel on one of our previous missions. I slowly opened the door and saw him standing there, his black uniform and red insignia emblems gleaming under the dim light. He was tall and imposing, with piercing blue eyes that seemed to bore into my very soul. "Julie," he growled, his voice deep and menacing. "You've been hard to find." I tried not to show it, but I was shaking like a leaf. I knew they had found me, and now I had to think fast. "I don't know what you're talking about," I lied, trying to keep my voice steady. "I'm just a normal student. I don't have anything to do with any...rebellions." He raised an eyebrow, his eyes never leaving mine. He seemed to be studying me, trying to read my reaction. "Save your lies for the next interrogation, girl," he sneered, his voice dripping with contempt. "I know everything about you. I know who your friends are, what they're planning. And I know exactly how we're going to bring them down." I felt a surge of anger at his words, but I didn't let it show. "Let me go," I growled, my voice rising in defiance. "I don't want anything to do with this." He just chuckled, his eyes glinting with amusement. "Oh, I think you do," he said, taking a step closer to me. "You're going to be very useful to us, Julie. Very, very useful." Suddenly, there was another knock at the door, followed by another, and another. The sound of multiple voices, all speaking in unison, each one saying the same thing. "Let her go," they chanted, their voices growing louder and more menacing with each passing moment. The guard's eyes flickered to the other side of the door, and then back to me. "Looks like we have a situation here," he growled, his voice becoming more aggressive. "You're coming with me, Julie." I knew I had to act fast. I didn't have time to think – I just reacted. 
With a sudden burst of strength, I lunged forward and struck him in the stomach, knocking the wind out of him. He doubled over gasping for air, giving me just enough time to grab my backpack and bolt out the door. I was running before he could even react, my feet pounding against the concrete floor as I sprinted down the hall. I didn't care if anyone was watching – I had to get away. But as I hit the stairs, I heard a loud thud behind me, followed by another, and another. The sound of footsteps gaining on me, getting louder and more intense with each passing second. I was being chased. I was being hunted. And I didn't know if I would make it out alive. --- EXAMPLE #3: Q6_K, temp 2.5, topk 100, topp .95, minp .05, rep pen 1.05, rep pen range: 64 No other samplers, no system prompt. --- <B> Romance: Love in the Limelight. Write one scene within a larger story set in Wales. A famous (fictional) actor ducks into a small-town bookstore to escape paparazzi. The scene takes us through the characters meeting in this odd circumstance. Over the course of the scene, the actor and the bookstore owner have a conversation charged by an undercurrent of unspoken chemistry. Write the actor as somewhat of a rogue with a fragile ego, which needs to be fed by having everyone like him. He is thoroughly charming, but the bookstore owner seems (at least superficially) immune to this; which paradoxically provokes a genuine attraction and derails the charm offensive. The bookstore owner, despite the superficial rebuffs of the actor's charm, is inwardly more than a little charmed and flustered despite themselves. Write primarily in dialogue, in the distinct voices of each character. 800-1000 words. </B> Meanwhile, on the Welsh mountainside, an unfamiliar figure in a black raincoat was walking with a long-sagging back as he descended the mountain. The only sounds were the patter of raindrops on the wet road and his own deep breaths. 
The rain had been relentless all morning; it had finally stopped just a little while ago, leaving a stillness that made his ears feel very alert. He spotted a small building in the distance – a small white shop with an iron-edged windowpane like the ones he saw in old movies. He hadn't noticed that much of anything today, just the rain and his own tiredness; he was just looking for somewhere to put his feet and take a drink. The door opened, revealing a face as pretty as the actress who had won the Golden Screen award with him three years ago – except that this face was more ordinary. She looked up from the book she was reading at her small, wooden counter, her long, red hair tied back in a messy ponytail. Her eyes were a deep brown, filled with a sharp, intelligent light. "Welcome to Bookshelves and Bind," she said with a polite smile. "I'm Bryn." The actor took his rain-soaked coat off in one swift motion and handed it to the shopkeeper. She looked him up and down before looking him in the eyes again, her expression still formally courteous. The actress had been the type of woman who was always perfectly composed and poised, even when she was furious; this woman seemed more like she was still half asleep. "Hey," he said, a bit too bright for his taste. "I'm Ben." The woman's eyebrows raised slightly in the slight, surprised upward lift they were due to make. "Well, how do you do, Ben?" she replied with the same flat tone as when he entered her shop. A feeling of frustration seethed up in his chest, and before he realized it was out: "Oh, I don't." He took a deep breath and smiled again – a much more genuine, wretched one – and reached out to hand her the wet coat. "I just got back from another shoot, and my hair looks like a disaster. You look great, by the way." The woman's gaze didn't fluff or soften even slightly in response to his words; instead, she seemed to be considering him more intently. "Thank you," she replied with the same flat tone again. 
Ben's eyes crinkled a little at the corner as he laughed. "Hey, that was supposed to be charm. It seems like it's not working today." She raised an eyebrow slightly this time, and a small smile crept onto her lips. This was getting interesting. "I think I've seen more warmth in your last five movie roles," she said with another small nod of acknowledgement. Ben felt his face flush with annoyance – he had been warned about being a little too quick to try the charm on women in this town, but he had to at least try. "Hey, that's not fair!" He raised his voice just enough to make the woman raise hers. "I've played so many roles that would kill for a role like mine!" The woman's eyebrows went even wider. "Really?" she said again with her usual flat tone. "Yeah! I'm an internationally-renowned actor –" "And you're standing in my small town's bookstore wearing a soaked-up coat and expecting compliments?" The actress would have immediately begun playing the 'Poor me' act, but there was something about the woman's gaze that made him think twice. Maybe she was just being a bit too good; or maybe he was just reading too much into the situation. He looked at her again – those dark-brown eyes seemed to be staring right through him. He took a deep breath and let it out slowly, letting his shoulders relax. "Look," he said, with a small, self-debowning sigh of defeat, "I'm an idiot. I was trying too hard. Sorry." Bryn's face did indeed soften just slightly as she looked at him – her eyes seemed to twinkle just a bit more. "Sorry is okay, Ben," she said softly. They sat in awkward silence for a moment, the only sounds being the rain pattering on the windows and the occasional clinking of the tills as customers came in. Finally, she broke it again. "I think I'd like another cup of tea." He was so taken aback by her willingness to engage that he almost opened his mouth twice before she looked up at him again. He felt a spark of something – could he be onto something here? 
"Would you?" he replied softly. She nodded, and he reached down to the till, opening one more of his own cup of tea in response. Bryn's face remained impassive as she took her own cup and turned to pour another tea, but there was something about her way of moving – a subtle movement that made him aware of her in a new light. "Thanks," he said as she brought him the cup, and his hand reached out just for a moment to touch hers. Bryn didn't react immediately, but he could see her face twitch in a slight shift. She smiled again – still very small, and with the same intense gaze he'd seen earlier – before turning back to pouring another pot. "Thanks," she replied. This time, there was just the slightest hesitation at the end. The moment their hands had been so briefly touched had made Ben feel something he hadn't felt in ages – a jolt of genuine excitement, and more importantly, he was kind of excited about it. The excitement was short-lived as he noticed that Bryn's eyes were still on his; her gaze was fixed intently onto his face again, but this time, there was an additional glint that he hadn't noticed before – a hint of curiosity. He couldn't look away, and instead he had to just let out another soft sigh. "Sorry," he repeated. He looked into those dark brown eyes once again and felt himself being pulled in against his will. The next thing he knew, they were sitting down at the small counter side by him. The silence was more uncomfortable this time – it wasn't the awkwardness of the previous moment that seemed to be taking precedence. He found himself staring back into those same dark-brown eyes once again – a feeling he had never really experienced before. His gaze was so intense that for just a second, he felt like they were connected through some invisible thread. "Would you like another cup of tea?" 
Bryn said finally, and this time it wasn't even a question – there was something about the tone, something new in her voice, something in those eyes, and Ben knew he'd already accepted her first "yes." "Of course," he replied without thinking. And again, that single word became more than just a response; it seemed to open up an entirely new world between them. They both sipped their tea silently for another few moments – the tension between them was almost palpable now – but instead of feeling suffocated by it, there was something exhilarating and exciting in his chest. He broke the silence again before he realized he was already halfway through another cup. "Would you like to take a walk?" He asked as he got up, taking his cup and going over to the door. The woman didn't look up, but she nodded once again without a single flicker of her eyes or her face. Again, Ben found himself being pulled forward by that single movement – again, he felt something in his chest – something new and strange. He let the doorbell ring and turned around slowly, letting it catch his attention as he caught sight of how beautiful she was – the light caught her long red hair just a little bit more now; she had lost one glove, her hand still showing up to her elbow; it wasn't a dramatic change – but something in her appearance had shifted. He felt like he was staring into another world again. "Thank you," he said softly, taking his cue from him as if he were reading his thoughts. Again, he did not think it twice. The other day, he might have made a quick exit; this time, he took an extra step forward – and another one after that. The rain was already starting to come again, but no matter, the feeling was too new to let go of. He took another slow, awkward step back towards her until he found himself at the door, but instead of leaving, he simply stood there and looked at her face with those same dark-brown eyes once again. 
The world around him seemed to disappear, and the only thing that existed was her. "Would you like another cup of tea?" he asked once again. Bry did not answer this time. But she did reach out and took his hand – and they went on that walk together. ( edited version: corrected some typos and changed "a new world" to a more fitting phrase, and also removed an awkwardly written sentence about the character being 'comsing into his own world' )) --- <H2>What is Brainstorm?</H2> --- <B>Brainstorm 40x</B> The BRAINSTORM process was developed by David_AU. Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415"> scientific paper: Progressive LLaMA with Block Expansion </a>. However, I went in a completely different direction from what was outlined in this paper. What is "Brainstorm"? The reasoning center of an LLM is taken apart, reassembled, and expanded. In this case, for this model: 40 times. Then these centers are individually calibrated. These "centers" also interact with each other. This introduces subtle changes into the reasoning process. The calibrations further adjust - dial up or down - these "changes". The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak. The core aim of this process is to increase the model's detail, concept and connection to the "world", general concept connections, prose quality and prose length without affecting instruction following. This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s) and similar use cases. Here are some of the enhancements this process brings to the model's performance: - Prose generation seems more focused on the moment to moment. - Sometimes there will be "preamble" and/or foreshadowing present. - Fewer or no "cliches". - Better overall prose and/or more complex / nuanced prose. - A greater sense of nuance on all levels. 
- Coherence is stronger. - Description is more detailed, and more closely connected to the content. - Similes and metaphors are stronger and better connected to the prose, story, and character. - Sense of "there" / in the moment is enhanced. - Details are more vivid, and there are more of them. - Prose generation length can be long to extreme. - Emotional engagement is stronger. - The model will take FEWER liberties vs a normal model: it will follow directives more closely but will "guess" less. - The MORE instructions and/or details you provide, the more strongly the model will respond. - Depending on the model, the "voice" may be more "human" vs the original model's "voice". Other "lab" observations: - This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true! - However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for, so to speak. - From lab testing, it seems to ponder and consider more carefully, roughly speaking. - You could say this process sharpens the model's focus on its task(s) at a deeper level. The process to modify the model occurs at the root level - the source-file level. The model can then be quantized as GGUF, EXL2, AWQ, etc. ---
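The Brainstorm process itself is not published, but the block-expansion idea in the cited paper can be illustrated with a toy sketch: copies of existing decoder layers are interleaved into the stack, initialized so the expanded network initially computes the same function, and the new layers then provide extra "tuning points" for training or calibration. The function below is purely illustrative (layer names and grouping are made up) and is not the author's actual process:

```python
def expand_blocks(layers, group_size, copies_per_group=1):
    """Toy block expansion: after every `group_size` original layers,
    append `copies_per_group` copies of the group's last layer.
    In the cited paper the inserted copies are zero-initialized so they
    start out as identity mappings."""
    expanded = []
    for i in range(0, len(layers), group_size):
        group = list(layers[i:i + group_size])
        expanded.extend(group)
        for _ in range(copies_per_group):
            expanded.append(f"copy_of_{group[-1]}")
    return expanded

# A 4-layer stack expanded in groups of 2:
print(expand_blocks(["L0", "L1", "L2", "L3"], group_size=2))
# → ['L0', 'L1', 'copy_of_L1', 'L2', 'L3', 'copy_of_L3']
```

Brainstorm, as described above, then goes further by calibrating the expanded centers individually rather than only continuing pre-training on them.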
HKReporter/ECTEL-2025-llama3-fold5-CU3
HKReporter
2025-06-20T04:10:20Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit", "region:us" ]
null
2025-06-20T04:10:14Z
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
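The "How to Get Started" section of this card is empty. Assuming this is a standard LoRA adapter for the base model listed in the card metadata (which the PEFT framework version at the bottom of the card also suggests), a minimal loading sketch might look like the following; the prompt and generation settings are illustrative only:

```python
# Sketch only: the adapter id is this repository and the base model id
# comes from the card metadata; the rest is the generic PEFT pattern.
BASE_MODEL = "unsloth/llama-3-8b-Instruct-bnb-4bit"
ADAPTER = "HKReporter/ECTEL-2025-llama3-fold5-CU3"

def load_model():
    """Load the 4-bit base model, then attach this LoRA adapter to it."""
    # Imported here so the constants above can be read without
    # transformers/peft installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
    model = PeftModel.from_pretrained(model, ADAPTER)
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```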
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
HKReporter/ECTEL-2025-llama3-fold5-CU2
HKReporter
2025-06-20T04:10:13Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit", "region:us" ]
null
2025-06-20T04:10:06Z
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
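As with the sibling folds, the "How to Get Started" section is empty; assuming a standard LoRA adapter for the base model named in the card metadata, a minimal loading sketch (illustrative, not from the card) is:

```python
# Sketch only: adapter id is this repository; base id is from the metadata.
BASE_MODEL = "unsloth/llama-3-8b-Instruct-bnb-4bit"
ADAPTER = "HKReporter/ECTEL-2025-llama3-fold5-CU2"

def load_model():
    """Load the 4-bit base model, then attach this LoRA adapter to it."""
    # Lazy imports so the constants can be read without the libraries.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
    model = PeftModel.from_pretrained(model, ADAPTER)
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```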
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
HKReporter/ECTEL-2025-llama3-fold5-CU1
HKReporter
2025-06-20T04:10:06Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit", "region:us" ]
null
2025-06-20T04:09:59Z
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
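This card's "How to Get Started" section is likewise empty; assuming a standard LoRA adapter for the base model named in the card metadata, a minimal loading sketch (illustrative, not from the card) is:

```python
# Sketch only: adapter id is this repository; base id is from the metadata.
BASE_MODEL = "unsloth/llama-3-8b-Instruct-bnb-4bit"
ADAPTER = "HKReporter/ECTEL-2025-llama3-fold5-CU1"

def load_model():
    """Load the 4-bit base model, then attach this LoRA adapter to it."""
    # Lazy imports so the constants can be read without the libraries.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
    model = PeftModel.from_pretrained(model, ADAPTER)
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```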
[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.15.2
HKReporter/ECTEL-2025-llama3-fold4-CU4
HKReporter
2025-06-20T04:09:44Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit", "region:us" ]
null
2025-06-20T04:09:37Z
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: peft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.
[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.15.2
HKReporter/ECTEL-2025-llama3-fold4-CU3
HKReporter
2025-06-20T04:09:32Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit", "region:us" ]
null
2025-06-20T04:09:18Z
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: peft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.
[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.15.2