Dataset schema (column name, type, and observed range):

| Column | Type | Observed range / values |
|:--------------|:---------|:-------------------------------------------|
| modelId | string | length 5 to 138 |
| author | string | length 2 to 42 |
| last_modified | date | 2020-02-15 11:33:14 to 2025-05-11 06:26:45 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 453 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | date | 2022-03-02 23:29:04 to 2025-05-11 06:26:18 |
| card | string | length 11 to 1.01M |
grshaw8888/boson
grshaw8888
"2025-05-10T00:59:31Z"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-05-10T00:30:38Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: boson --- # Boson <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `boson` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "boson", "lora_weights": "https://huggingface.co/grshaw8888/boson/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('grshaw8888/boson', weight_name='lora.safetensors') image = pipeline('boson').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/grshaw8888/boson/discussions) to add images that show off what you’ve made with this LoRA.
xxmoeedxx/wav2vec2_aa
xxmoeedxx
"2025-05-10T00:53:08Z"
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "audio-classification", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
audio-classification
"2025-05-10T00:46:50Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cgifbribcgfbi/alpha32_r64_lr0.00002_Qwen2.5-72B-Ins_textbook_5000
cgifbribcgfbi
"2025-05-10T00:46:37Z"
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "dataset:chemNLP/chemistry-bookshelves-merged", "base_model:zetasepic/Qwen2.5-72B-Instruct-abliterated", "base_model:adapter:zetasepic/Qwen2.5-72B-Instruct-abliterated", "license:other", "4-bit", "bitsandbytes", "region:us" ]
null
"2025-05-09T19:52:45Z"
--- library_name: peft license: other base_model: zetasepic/Qwen2.5-72B-Instruct-abliterated tags: - axolotl - generated_from_trainer datasets: - chemNLP/chemistry-bookshelves-merged model-index: - name: alpha32_r64_lr0.00002_Qwen2.5-72B-Ins_textbook_5000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.9.1` ```yaml base_model: zetasepic/Qwen2.5-72B-Instruct-abliterated load_in_8bit: false load_in_4bit: true adapter: qlora wandb_name: Qwen2.5-72B-Ins_outputs_axolotl_ft_alpha32_r64_lr0.00002_Qwen2.5-72B-Ins_textbook_5000 output_dir: ./outputs/out/Qwen2.5-72B-Ins_outputs_axolotl_ft_alpha32_r64_lr0.00002_Qwen2.5-72B-Ins_textbook_5000 hub_model_id: cgifbribcgfbi/alpha32_r64_lr0.00002_Qwen2.5-72B-Ins_textbook_5000 tokenizer_type: AutoTokenizer push_dataset_to_hub: strict: false datasets: - path: chemNLP/chemistry-bookshelves-merged type: completion dataset_prepared_path: last_run_prepared val_set_size: 0.04 save_safetensors: true sequence_len: 2700 sample_packing: true pad_to_sequence_len: true lora_r: 64 lora_alpha: 32 lora_dropout: 0.05 lora_target_modules: lora_target_linear: true wandb_mode: wandb_project: finetune-sweep wandb_entity: gpoisjgqetpadsfke wandb_watch: wandb_run_id: wandb_log_model: gradient_accumulation_steps: 1 micro_batch_size: 4 # This will be automatically adjusted based on available GPU memory num_epochs: 4 optimizer: adamw_torch_fused lr_scheduler: cosine learning_rate: 0.00002 train_on_inputs: false group_by_length: true bf16: true tf32: true gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: true logging_steps: 1 flash_attention: true warmup_steps: 10 evals_per_epoch: 3 saves_per_epoch: 1 weight_decay: 0.01 fsdp: - full_shard - auto_wrap fsdp_config: fsdp_limit_all_gathers: true fsdp_sync_module_states: true fsdp_offload_params: false fsdp_use_orig_params: false fsdp_cpu_ram_efficient_loading: true fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer fsdp_state_dict_type: FULL_STATE_DICT fsdp_sharding_strategy: FULL_SHARD special_tokens: pad_token: <|finetune_right_pad_id|> ``` </details><br> # alpha32_r64_lr0.00002_Qwen2.5-72B-Ins_textbook_5000 This model is a fine-tuned version of [zetasepic/Qwen2.5-72B-Instruct-abliterated](https://huggingface.co/zetasepic/Qwen2.5-72B-Instruct-abliterated) on the chemNLP/chemistry-bookshelves-merged dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8187 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 4.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.8113 | 0.0032 | 1 | 0.9217 | | 0.8094 | 0.3354 | 106 | 0.8685 | | 1.001 | 0.6709 | 212 | 0.8451 | | 0.5993 | 1.0063 | 318 | 0.8363 | | 0.8479 | 1.3418 | 424 | 0.8301 | | 1.0613 | 1.6772 | 530 | 0.8253 | | 0.6089 | 2.0127 | 636 | 0.8239 | | 1.0859 | 2.3481 | 742 | 0.8211 | | 0.8476 | 2.6835 | 848 | 0.8197 | | 0.794 | 3.0190 | 954 | 0.8201 | | 0.933 | 3.3544 | 1060 | 0.8187 | | 1.0366 | 3.6899 | 1166 | 0.8187 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
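The card above ends without an inference example. A minimal sketch for loading this adapter on top of its abliterated base, assuming 4-bit loading to mirror the QLoRA training setup (model names come from the card; the prompt and generation settings are illustrative):

```python
# Hedged sketch: load the QLoRA adapter on Qwen2.5-72B-Instruct-abliterated in 4-bit.
# Assumes enough GPU memory for a 72B model in 4-bit, spread across devices.
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig
from peft import AutoPeftModelForCausalLM

adapter_id = "cgifbribcgfbi/alpha32_r64_lr0.00002_Qwen2.5-72B-Ins_textbook_5000"

model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# If the adapter repo does not ship a tokenizer, load it from the base model instead.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# The adapter was trained on completion-style chemistry text, so a plain prompt is used.
prompt = "The difference between SN1 and SN2 reaction mechanisms is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```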
mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF
mradermacher
"2025-05-10T00:44:44Z"
0
1
transformers
[ "transformers", "gguf", "en", "base_model:rd211/Qwen2.5-7B-Instruct-HardLambda0.1-220", "base_model:quantized:rd211/Qwen2.5-7B-Instruct-HardLambda0.1-220", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-09T22:56:18Z"
--- base_model: rd211/Qwen2.5-7B-Instruct-HardLambda0.1-220 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/rd211/Qwen2.5-7B-Instruct-HardLambda0.1-220 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers 
to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
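For readers who want a concrete starting point beyond the linked READMEs, a hedged llama.cpp invocation for the Q4_K_M quant from the table above (requires a llama.cpp build with CURL support for the `--hf-repo` download path; prompt and token count are illustrative):

```bash
# Hedged example: download and run the Q4_K_M quant directly with llama.cpp.
llama-cli \
  --hf-repo mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF \
  --hf-file Qwen2.5-7B-Instruct-HardLambda0.1-220.Q4_K_M.gguf \
  -p "Explain what a quantized language model is." -n 128
```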
oukacisarah/fine_tunning_Llama3.2_3B_with_DziriFake
oukacisarah
"2025-05-10T00:43:47Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-05-10T00:43:42Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf
RichardErkhov
"2025-05-10T00:43:41Z"
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-09T21:54:55Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) labor-llama3-8b-Instruct-20241002 - GGUF - Model creator: https://huggingface.co/clinno/ - Original model: https://huggingface.co/clinno/labor-llama3-8b-Instruct-20241002/ | Name | Quant method | Size | | ---- | ---- | ---- | | [labor-llama3-8b-Instruct-20241002.Q2_K.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q2_K.gguf) | Q2_K | 2.96GB | | [labor-llama3-8b-Instruct-20241002.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [labor-llama3-8b-Instruct-20241002.IQ3_S.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.IQ3_S.gguf) | IQ3_S | 3.43GB | | [labor-llama3-8b-Instruct-20241002.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [labor-llama3-8b-Instruct-20241002.IQ3_M.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.IQ3_M.gguf) | IQ3_M | 3.52GB | | [labor-llama3-8b-Instruct-20241002.Q3_K.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q3_K.gguf) | Q3_K | 3.74GB | | [labor-llama3-8b-Instruct-20241002.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [labor-llama3-8b-Instruct-20241002.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [labor-llama3-8b-Instruct-20241002.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [labor-llama3-8b-Instruct-20241002.Q4_0.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q4_0.gguf) | Q4_0 | 4.34GB | | [labor-llama3-8b-Instruct-20241002.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [labor-llama3-8b-Instruct-20241002.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [labor-llama3-8b-Instruct-20241002.Q4_K.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q4_K.gguf) | Q4_K | 4.58GB | | [labor-llama3-8b-Instruct-20241002.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [labor-llama3-8b-Instruct-20241002.Q4_1.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q4_1.gguf) | Q4_1 | 
4.78GB | | [labor-llama3-8b-Instruct-20241002.Q5_0.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q5_0.gguf) | Q5_0 | 5.21GB | | [labor-llama3-8b-Instruct-20241002.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [labor-llama3-8b-Instruct-20241002.Q5_K.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q5_K.gguf) | Q5_K | 5.34GB | | [labor-llama3-8b-Instruct-20241002.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [labor-llama3-8b-Instruct-20241002.Q5_1.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q5_1.gguf) | Q5_1 | 5.65GB | | [labor-llama3-8b-Instruct-20241002.Q6_K.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q6_K.gguf) | Q6_K | 6.14GB | | [labor-llama3-8b-Instruct-20241002.Q8_0.gguf](https://huggingface.co/RichardErkhov/clinno_-_labor-llama3-8b-Instruct-20241002-gguf/blob/main/labor-llama3-8b-Instruct-20241002.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers license: other base_model: NousResearch/Meta-Llama-3-8B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: sft results: [] datasets: - clinno/labor20240807-json --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sft This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the identity and the labor20240807 datasets. It achieves the following results on the evaluation set: - Loss: 1.1235 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 12.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:----:|:---------------:| | 0.9225 | 3.6331 | 1000 | 1.1293 | | 0.7145 | 7.2661 | 2000 | 1.1055 | | 0.6401 | 10.8992 | 3000 | 1.1234 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
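As a complement to the quant table above, a minimal llama-cpp-python sketch for running one of these files locally, assuming the Q4_K_M file has already been downloaded (context size and sampling settings are illustrative):

```python
# Hedged sketch: local inference over a downloaded GGUF with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="labor-llama3-8b-Instruct-20241002.Q4_K_M.gguf",  # downloaded from the table above
    n_ctx=4096,  # context window; adjust for your use case
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the main protections in labor law."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```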
shayanfirouzian/Qwen2.5-1.5B_SocialReasoning
shayanfirouzian
"2025-05-10T00:40:36Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-05-10T00:40:27Z"
--- base_model: unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** shayanfirouzian - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
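The card records training provenance only. A hedged inference sketch using Unsloth's own loader, assuming 4-bit loading to match the bnb-4bit base (prompt and sequence length are illustrative):

```python
# Hedged sketch: load the uploaded model with Unsloth for inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="shayanfirouzian/Qwen2.5-1.5B_SocialReasoning",
    max_seq_length=2048,
    load_in_4bit=True,  # assumption: mirrors the bnb-4bit base it was tuned from
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's fast generation path

inputs = tokenizer("Why do people tend to conform in groups?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```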
UNISG-MCS/NLP
UNISG-MCS
"2025-05-10T00:37:59Z"
31
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:deepseek-ai/deepseek-coder-33b-instruct", "base_model:adapter:deepseek-ai/deepseek-coder-33b-instruct", "license:other", "region:us" ]
null
"2025-05-07T11:33:11Z"
--- library_name: peft license: other base_model: deepseek-ai/deepseek-coder-33b-instruct tags: - generated_from_trainer model-index: - name: NLP results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NLP This model is a fine-tuned version of [deepseek-ai/deepseek-coder-33b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4692 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 300 | 1.5509 | | 1.7095 | 2.0 | 600 | 1.4852 | | 1.7095 | 3.0 | 900 | 1.4692 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
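No usage snippet is given above; one hedged way to attach this LoRA adapter to its deepseek-coder base with PEFT (dtype and device placement are assumptions, and a 33B base needs substantial GPU memory):

```python
# Hedged sketch: attach the UNISG-MCS/NLP adapter to deepseek-coder-33b-instruct.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/deepseek-coder-33b-instruct"
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "UNISG-MCS/NLP")
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("Write a Python function that parses a CSV file.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```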
timotheaprosperou/timotheaprosperous
timotheaprosperou
"2025-05-10T00:37:17Z"
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
"2025-05-10T00:37:17Z"
--- license: artistic-2.0 ---
nijatmammadov/model2
nijatmammadov
"2025-05-10T00:35:06Z"
0
0
null
[ "safetensors", "bert", "model_hub_mixin", "pytorch_model_hub_mixin", "text-classification", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "region:us" ]
text-classification
"2025-05-08T22:33:04Z"
--- tags: - model_hub_mixin - pytorch_model_hub_mixin - model_hub_mixin license: apache-2.0 base_model: - google-bert/bert-base-uncased pipeline_tag: text-classification --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
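Checkpoints pushed with PyTorchModelHubMixin are reloaded through the author's own model class rather than an Auto class. The sketch below is hypothetical: `BertClassifier` and its constructor arguments stand in for the real (unpublished) class, which must match the pushed weights for loading to succeed:

```python
# Hedged sketch: PyTorchModelHubMixin models load via their defining class.
# `BertClassifier` is a hypothetical stand-in for the author's actual class.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin
from transformers import AutoModel

class BertClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, num_labels: int = 2):
        super().__init__()
        self.bert = AutoModel.from_pretrained("google-bert/bert-base-uncased")
        self.head = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(hidden.last_hidden_state[:, 0])  # classify from the [CLS] token

model = BertClassifier.from_pretrained("nijatmammadov/model2")
```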
IIEleven11/mn-rocinante-18.5b-exl2-8bpw
IIEleven11
"2025-05-10T00:33:47Z"
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:DavidAU/MN-Rocinante-18.5B-v1.1-Instruct", "base_model:quantized:DavidAU/MN-Rocinante-18.5B-v1.1-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "exl2", "region:us" ]
text-generation
"2025-05-04T23:53:39Z"
--- base_model: - DavidAU/MN-Rocinante-18.5B-v1.1-Instruct library_name: transformers tags: - mergekit - merge pipeline_tag: text-generation --- # MN-Rocinante-18.5B-v1.1-Instruct EXL2 8BPW This repo contains the EXL2 8bpw version of MN-Rocinante-18.5B-v1.1-Instruct. For full model details, please see: [https://huggingface.co/DavidAU/MN-Rocinante-18.5B-v1.1-Story-Wizard-ED1-Instruct-GGUF] Additional quants: Imatrix GGUFs: [https://huggingface.co/mradermacher/MN-Rocinante-18.5B-v1.1-Instruct-i1-GGUF] GGUFs: [https://huggingface.co/mradermacher/MN-Rocinante-18.5B-v1.1-Instruct-GGUF] --- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * G:/11B/Rocinante-12B-v1.1 * g:/11b/Mistral-Nemo-Instruct-2407-12B
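The card gives no loading example for the EXL2 weights; a hedged exllamav2 sketch, assuming the repo has been downloaded to a local directory (path and generation settings are placeholders):

```python
# Hedged sketch: run the 8bpw EXL2 weights with exllamav2's dynamic generator.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("/path/to/mn-rocinante-18.5b-exl2-8bpw")  # local clone of this repo
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split the model across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Once upon a time", max_new_tokens=128))
```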
Franklinmorillo/replicate-lora
Franklinmorillo
"2025-05-10T00:25:37Z"
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2025-05-10T00:25:23Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- Confident, chair and portrait of entrepreneur sitting happy with a smile and crossed legs isolated . output: url: images/envato-labs-image-edit.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: Franklin --- # lora <Gallery /> ## Trigger words You should use `Franklin` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Franklinmorillo/replicate-lora/tree/main) them in the Files & versions tab.
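The card names the trigger word but gives no code; a hedged diffusers sketch (bfloat16 and the bare `load_lora_weights` call are assumptions; pass `weight_name=` if the repo holds more than one weights file):

```python
# Hedged sketch: apply this FLUX.1-dev LoRA with diffusers.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("Franklinmorillo/replicate-lora")  # add weight_name=... if needed
image = pipeline("Franklin, portrait of an entrepreneur sitting in a chair with a confident smile").images[0]
image.save("franklin.png")
```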
SalomonMetre13/nllb-sna-en-mt-v1
SalomonMetre13
"2025-05-10T00:24:59Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "m2m_100", "text2text-generation", "generated_from_trainer", "translation", "base_model:SalomonMetre13/nllb-sna-en-mt-v1", "base_model:finetune:SalomonMetre13/nllb-sna-en-mt-v1", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2025-05-09T12:17:25Z"
--- library_name: transformers license: cc-by-nc-4.0 base_model: SalomonMetre13/nllb-sna-en-mt-v1 tags: - generated_from_trainer model-index: - name: nllb-sna-en-mt-v1 results: [] pipeline_tag: translation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nllb-sna-en-mt-v1 This model is a fine-tuned version of [SalomonMetre13/nllb-sna-en-mt-v1](https://huggingface.co/SalomonMetre13/nllb-sna-en-mt-v1) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.1462 - eval_runtime: 185.4131 - eval_samples_per_second: 33.477 - eval_steps_per_second: 8.37 - epoch: 0.7877 - step: 11000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
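The card omits a usage example. NLLB-family checkpoints address languages with FLORES-200 codes, so Shona-to-English uses `sna_Latn` and `eng_Latn`; a hedged sketch (the sample sentence is illustrative):

```python
# Hedged sketch: Shona -> English translation with the fine-tuned NLLB model.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="SalomonMetre13/nllb-sna-en-mt-v1",
    src_lang="sna_Latn",  # FLORES-200 code for Shona
    tgt_lang="eng_Latn",  # FLORES-200 code for English
)
print(translator("Mhoro, wakadini zvako?")[0]["translation_text"])
```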
tachiwin/multilingual_gguf_4km_a
tachiwin
"2025-05-10T00:23:35Z"
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:tachiwin/pretrained_multilingual_instruct", "base_model:quantized:tachiwin/pretrained_multilingual_instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-05-10T00:22:20Z"
--- base_model: tachiwin/pretrained_multilingual_instruct tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** tachiwin - **License:** apache-2.0 - **Finetuned from model :** tachiwin/pretrained_multilingual_instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
juhw/q4103
juhw
"2025-05-10T00:20:44Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-10T00:17:34Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ASethi04/google-gemma-2-9b-hellaswag-first-lora-4-0.0001-same-prompt-template
ASethi04
"2025-05-10T00:20:16Z"
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-2-9b", "base_model:finetune:google/gemma-2-9b", "endpoints_compatible", "region:us" ]
null
"2025-05-09T19:05:57Z"
--- base_model: google/gemma-2-9b library_name: transformers model_name: google-gemma-2-9b-hellaswag-first-lora-4-0.0001-same-prompt-template tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for google-gemma-2-9b-hellaswag-first-lora-4-0.0001-same-prompt-template This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/google-gemma-2-9b-hellaswag-first-lora-4-0.0001-same-prompt-template", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/63257kbl) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
nitindominicrai/sd35_anthracnose
nitindominicrai
"2025-05-10T00:19:20Z"
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "sd3", "sd3-diffusers", "base_model:stabilityai/stable-diffusion-3.5-medium", "base_model:adapter:stabilityai/stable-diffusion-3.5-medium", "license:other", "region:us" ]
text-to-image
"2025-05-09T23:00:10Z"
--- base_model: stabilityai/stable-diffusion-3.5-medium library_name: diffusers license: other instance_prompt: A dense canopy of watermelon plants with clusters of leaves displaying nbd anthracnose (Colletotrichum orbiculare) disease, intricate details, natural lighting, highly realistic widget: - text: a photo of nbd anthracnose disease in watermelon leaves output: url: image_0.png - text: a photo of nbd anthracnose disease in watermelon leaves output: url: image_1.png - text: a photo of nbd anthracnose disease in watermelon leaves output: url: image_2.png - text: a photo of nbd anthracnose disease in watermelon leaves output: url: image_3.png - text: a photo of nbd anthracnose disease in watermelon leaves output: url: image_4.png tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - sd3 - sd3-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SD3 DreamBooth LoRA - nitindominicrai/sd35_anthracnose <Gallery /> ## Model description These are nitindominicrai/sd35_anthracnose DreamBooth LoRA weights for stabilityai/stable-diffusion-3.5-medium. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md). Was LoRA for the text encoder enabled? False. ## Trigger words You should use `A dense canopy of watermelon plants with clusters of leaves displaying nbd anthracnose (Colletotrichum orbiculare) disease, intricate details, natural lighting, highly realistic` to trigger the image generation. ## Download model [Download the *.safetensors LoRA](/nitindominicrai/sd35_anthracnose/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3.5-medium', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('nitindominicrai/sd35_anthracnose', weight_name='pytorch_lora_weights.safetensors') image = pipeline('a photo of nbd anthracnose disease in watermelon leaves').images[0] ``` ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/nitindominicrai/sd35_anthracnose/blob/main/diffusers_lora_weights.safetensors)**. - Rename it and place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
codys12/bitnet-r1-8b
codys12
"2025-05-10T00:18:04Z"
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "bitnet", "region:us" ]
text-generation
"2025-05-10T00:12:18Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TOTORONG/TOTORONG-Q5_K_S-GGUF
TOTORONG
"2025-05-10T00:17:46Z"
0
0
transformers
[ "transformers", "gguf", "llama-factory", "llama-cpp", "gguf-my-repo", "base_model:TOTORONG/TOTORONG", "base_model:quantized:TOTORONG/TOTORONG", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-10T00:16:04Z"
--- base_model: TOTORONG/TOTORONG library_name: transformers tags: - llama-factory - llama-cpp - gguf-my-repo --- # TOTORONG/TOTORONG-Q5_K_S-GGUF This model was converted to GGUF format from [`TOTORONG/TOTORONG`](https://huggingface.co/TOTORONG/TOTORONG) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/TOTORONG/TOTORONG) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo TOTORONG/TOTORONG-Q5_K_S-GGUF --hf-file totorong-q5_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo TOTORONG/TOTORONG-Q5_K_S-GGUF --hf-file totorong-q5_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo TOTORONG/TOTORONG-Q5_K_S-GGUF --hf-file totorong-q5_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo TOTORONG/TOTORONG-Q5_K_S-GGUF --hf-file totorong-q5_k_s.gguf -c 2048 ```
shanchen/limo-dscombo-20250509_172112
shanchen
"2025-05-10T00:16:21Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-09T23:41:02Z"
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B library_name: transformers model_name: limo-dscombo-20250509_172112 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for limo-dscombo-20250509_172112 This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="shanchen/limo-dscombo-20250509_172112", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bitterman/s1/runs/nhg5twgu) This model was trained with SFT. ### Framework versions - TRL: 0.12.0 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.1.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
dim/2025_05_08_13_21_20_019296_checkpoint-14022
dim
"2025-05-10T00:10:45Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
"2025-05-09T22:54:38Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jruaechalar/firma_veinticinco
jruaechalar
"2025-05-10T00:09:22Z"
0
0
diffusers
[ "diffusers", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2025-05-09T23:33:34Z"
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Hamma-16/CAMeLBERT-Uniform-Soup
Hamma-16
"2025-05-10T00:08:25Z"
7
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-05-08T17:42:20Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hcstubbe/Qwen2.5-3B-Instruct-bnb-4bit
hcstubbe
"2025-05-10T00:00:04Z"
0
0
transformers
[ "transformers", "qwen2", "feature-extraction", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2025-05-09T23:58:27Z"
--- base_model: unsloth/Qwen2.5-3B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** hcstubbe - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-3B-Instruct This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
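The card above stops at the training summary, so here is a minimal inference sketch. It assumes the checkpoint loads with the standard `transformers` auto classes; given the `bnb-4bit` naming, the exact loading path (for example a quantization config or an Unsloth loader) may differ.

```python
# Minimal sketch, assuming the repo loads with the standard auto classes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hcstubbe/Qwen2.5-3B-Instruct-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what Unsloth does in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```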
async0x42/Qwen3-1.7B-exl3_4.5bpw
async0x42
"2025-05-09T23:59:34Z"
0
0
null
[ "safetensors", "qwen3", "unsloth", "base_model:Qwen/Qwen3-1.7B", "base_model:quantized:Qwen/Qwen3-1.7B", "exl3", "region:us" ]
null
"2025-05-09T23:58:49Z"
--- tags: - unsloth base_model: - Qwen/Qwen3-1.7B --- # Qwen3-1.7B ## Qwen3 Highlights Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features: - **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios. - **Significant enhancement in its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning. - **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience. - **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks. - **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**. ## Model Overview **Qwen3-1.7B** has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Number of Parameters: 1.7B - Number of Parameters (Non-Embedding): 1.4B - Number of Layers: 28 - Number of Attention Heads (GQA): 16 for Q and 8 for KV - Context Length: 32,768 For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Quickstart The code for Qwen3 has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.51.0`, you will encounter the following error: ``` KeyError: 'qwen3' ``` The following code snippet illustrates how to use the model to generate content based on given inputs. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen3-1.7B" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) # prepare the model input prompt = "Give me a short introduction to large language model." messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # Switches between thinking and non-thinking modes. Default is True. 
) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=32768 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() # parsing thinking content try: # rindex finding 151668 (</think>) index = len(output_ids) - output_ids[::-1].index(151668) except ValueError: index = 0 thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n") content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n") print("thinking content:", thinking_content) print("content:", content) ``` For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint: - vLLM: ```shell vllm serve Qwen/Qwen3-1.7B --enable-reasoning --reasoning-parser deepseek_r1 ``` - SGLang: ```shell python -m sglang.launch_server --model-path Qwen/Qwen3-1.7B --reasoning-parser deepseek-r1 ``` ## Switching Between Thinking and Non-Thinking Mode > [!TIP] > The `enable_thinking` switch is also available in APIs created by vLLM and SGLang. > Please refer to [our documentation](https://qwen.readthedocs.io/) for more details. ### `enable_thinking=True` By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # True is the default value for enable_thinking ) ``` In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response. > [!NOTE] > For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### `enable_thinking=False` We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # Setting enable_thinking=False disables thinking mode ) ``` In this mode, the model will not generate any think content and will not include a `<think>...</think>` block. > [!NOTE] > For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations. 
Here is an example of a multi-turn conversation: ```python from transformers import AutoModelForCausalLM, AutoTokenizer class QwenChatbot: def __init__(self, model_name="Qwen/Qwen3-1.7B"): self.tokenizer = AutoTokenizer.from_pretrained(model_name) self.model = AutoModelForCausalLM.from_pretrained(model_name) self.history = [] def generate_response(self, user_input): messages = self.history + [{"role": "user", "content": user_input}] text = self.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = self.tokenizer(text, return_tensors="pt") response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist() response = self.tokenizer.decode(response_ids, skip_special_tokens=True) # Update history self.history.append({"role": "user", "content": user_input}) self.history.append({"role": "assistant", "content": response}) return response # Example Usage if __name__ == "__main__": chatbot = QwenChatbot() # First input (without /think or /no_think tags, thinking mode is enabled by default) user_input_1 = "How many r's in strawberries?" print(f"User: {user_input_1}") response_1 = chatbot.generate_response(user_input_1) print(f"Bot: {response_1}") print("----------------------") # Second input with /no_think user_input_2 = "Then, how many r's in blueberries? /no_think" print(f"User: {user_input_2}") response_2 = chatbot.generate_response(user_input_2) print(f"Bot: {response_2}") print("----------------------") # Third input with /think user_input_3 = "Really? /think" print(f"User: {user_input_3}") response_3 = chatbot.generate_response(user_input_3) print(f"Bot: {response_3}") ``` > **Note** > For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled. > When `enable_thinking=False`, the soft switches have no effect. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block. ## Agentic Use Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity. To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself. ```python from qwen_agent.agents import Assistant # Define LLM llm_cfg = { 'model': 'Qwen3-1.7B', # Use the endpoint provided by Alibaba Model Studio: # 'model_type': 'qwen_dashscope', # 'api_key': os.getenv('DASHSCOPE_API_KEY'), # Use a custom endpoint compatible with OpenAI API: 'model_server': 'http://localhost:8000/v1', # api_base 'api_key': 'EMPTY', # Other parameters: # 'generate_cfg': { # # Add: When the response content is `<think>this is the thought</think>this is the answer; # # Do not add: When the response has been separated by reasoning_content and content. 
# 'thought_in_content': True, # }, } # Define Tools tools = [ {'mcpServers': { # You can specify the MCP configuration file 'time': { 'command': 'uvx', 'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai'] }, "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] } } }, 'code_interpreter', # Built-in tools ] # Define Agent bot = Assistant(llm=llm_cfg, function_list=tools) # Streaming generation messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}] for responses in bot.run(messages=messages): pass print(responses) ``` ## Best Practices To achieve optimal performance, we recommend the following settings: 1. **Sampling Parameters**: - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance. 2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance. 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`." 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed. ### Citation If you find our work helpful, feel free to give us a cite. ``` @misc{qwen3, title = {Qwen3}, url = {https://qwenlm.github.io/blog/qwen3/}, author = {Qwen Team}, month = {April}, year = {2025} } ```
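To make the Best Practices section concrete, the sketch below passes the quoted thinking-mode sampling settings to `transformers`' `generate`. It is an illustrative recombination of the card's own Quickstart snippet and the recommended values, not an additional official example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-1.7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

text = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Prove that the sum of two even numbers is even."}],
    tokenize=False, add_generation_prompt=True, enable_thinking=True,
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Thinking-mode sampling settings quoted in Best Practices; greedy decoding is discouraged.
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
    max_new_tokens=32768,  # recommended output budget for most queries
)
print(tokenizer.decode(output[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```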
OpenTO/LatentPhysx
OpenTO
"2025-05-09T23:59:26Z"
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
"2025-05-09T23:58:55Z"
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
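As background, the mixin workflow this card points to looks roughly like the sketch below. The `TinyNet` class, its sizes, and the repo id in `push_to_hub` are hypothetical; loading a checkpoint this way assumes the original class definition is available, which this card does not provide.

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical module illustrating the PyTorchModelHubMixin workflow.
class TinyNet(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

# Push and reload round-trip the weights plus the init kwargs (saved as config.json):
model = TinyNet(hidden=64)
model.push_to_hub("your-username/tiny-net")      # hypothetical repo id
reloaded = TinyNet.from_pretrained("your-username/tiny-net")
```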
ethandudley/ethandudley6
ethandudley
"2025-05-09T23:59:13Z"
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
"2025-05-09T23:59:13Z"
--- license: artistic-2.0 ---
Grogros/Llama-3.2-1B-OurInstruct-distillation-Alpaca-3.0-AlpacaRefuseSmooth
Grogros
"2025-05-09T23:53:19Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:mveroe/Llama-3.2-1B-OurInstruct", "base_model:finetune:mveroe/Llama-3.2-1B-OurInstruct", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-09T20:50:04Z"
--- library_name: transformers license: llama3.2 base_model: mveroe/Llama-3.2-1B-OurInstruct tags: - generated_from_trainer model-index: - name: Llama-3.2-1B-OurInstruct-distillation-Alpaca-3.0-AlpacaRefuseSmooth results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-3.2-1B-OurInstruct-distillation-Alpaca-3.0-AlpacaRefuseSmooth This model is a fine-tuned version of [mveroe/Llama-3.2-1B-OurInstruct](https://huggingface.co/mveroe/Llama-3.2-1B-OurInstruct) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adafactor (no additional optimizer arguments) - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 2000 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.2.0a0+81ea7a4 - Datasets 3.5.0 - Tokenizers 0.21.1
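Purely as an illustration, the listed hyperparameters map onto `transformers.TrainingArguments` roughly as follows; this is a reconstruction under stated assumptions, not the training script that was actually used.

```python
from transformers import TrainingArguments

# Rough reconstruction of the hyperparameters listed above (illustrative only;
# output_dir is a placeholder, and the real script may have differed).
args = TrainingArguments(
    output_dir="llama3.2-1b-distill",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adafactor",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_steps=2000,
)
```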
mveroe/phi-2-ceCode-OurInstruct
mveroe
"2025-05-09T23:51:11Z"
0
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "generated_from_trainer", "conversational", "base_model:microsoft/phi-2", "base_model:finetune:microsoft/phi-2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-09T20:19:21Z"
--- library_name: transformers license: mit base_model: microsoft/phi-2 tags: - generated_from_trainer model-index: - name: phi-2-ceCode-OurInstruct results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-2-ceCode-OurInstruct This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adafactor (no additional optimizer arguments) - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 2000 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.5.1 - Tokenizers 0.21.1
shanchen/ds-limo-te-250
shanchen
"2025-05-09T23:50:50Z"
86
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-28T22:01:05Z"
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B library_name: transformers model_name: ds-limo-te-250 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for ds-limo-te-250 This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="shanchen/ds-limo-te-250", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bitterman/s1/runs/s59oi6p3) This model was trained with SFT. ### Framework versions - TRL: 0.12.0 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.1.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Naga1289/GF_AndyWarhol
Naga1289
"2025-05-09T23:50:48Z"
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2025-05-09T23:48:58Z"
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Grayx/jpii_27
Grayx
"2025-05-09T23:48:04Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-09T23:46:41Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Naga1289/GF_HenriMatisse
Naga1289
"2025-05-09T23:45:01Z"
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2025-05-09T23:43:04Z"
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Szeth99/efrat
Szeth99
"2025-05-09T23:44:39Z"
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-05-09T23:43:24Z"
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: efratazz license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # efratagg A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `efratazz` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
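The card covers UI tools only; for `diffusers` users, a plausible loading sketch is shown below. It assumes the standard FLUX LoRA workflow and that the repo's weight file is auto-discoverable; neither is confirmed by the trainer.

```python
import torch
from diffusers import FluxPipeline

# Sketch assuming the standard diffusers FLUX LoRA workflow (not from the card).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Szeth99/efrat")  # assumes the LoRA file is auto-discoverable
image = pipe(
    "portrait photo of efratazz",  # `efratazz` is the trigger word from the card
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("efratazz.png")
```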
YWZBrandon/google_flan-t5-base_semantic_3_clusters_1_full_upsample1000
YWZBrandon
"2025-05-09T23:42:25Z"
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2025-05-09T23:41:57Z"
--- library_name: transformers license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer model-index: - name: google_flan-t5-base_semantic_3_clusters_1_full_upsample1000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # google_flan-t5-base_semantic_3_clusters_1_full_upsample1000 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 256 - total_eval_batch_size: 16 - optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments) - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
ivnle/qwen2.5-1.5b-instruct_codex-intervals-20_lora_r32-a128_sft
ivnle
"2025-05-09T23:42:14Z"
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
"2025-05-09T23:30:28Z"
--- base_model: Qwen/Qwen2.5-1.5B-Instruct library_name: transformers model_name: qwen2.5-1.5b-instruct_codex-intervals-20_lora_r32-a128_sft tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen2.5-1.5b-instruct_codex-intervals-20_lora_r32-a128_sft This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ivnle/qwen2.5-1.5b-instruct_codex-intervals-20_lora_r32-a128_sft", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ivnle/huggingface/runs/3pd3j3ba) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf
RichardErkhov
"2025-05-09T23:41:01Z"
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-09T22:59:03Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama3-8b-math-sft-full-5-epoch-4 - GGUF - Model creator: https://huggingface.co/Dynosaur/ - Original model: https://huggingface.co/Dynosaur/llama3-8b-math-sft-full-5-epoch-4/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama3-8b-math-sft-full-5-epoch-4.Q2_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q2_K.gguf) | Q2_K | 2.96GB | | [llama3-8b-math-sft-full-5-epoch-4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [llama3-8b-math-sft-full-5-epoch-4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.IQ3_S.gguf) | IQ3_S | 3.43GB | | [llama3-8b-math-sft-full-5-epoch-4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [llama3-8b-math-sft-full-5-epoch-4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.IQ3_M.gguf) | IQ3_M | 3.52GB | | [llama3-8b-math-sft-full-5-epoch-4.Q3_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q3_K.gguf) | Q3_K | 3.74GB | | [llama3-8b-math-sft-full-5-epoch-4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [llama3-8b-math-sft-full-5-epoch-4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [llama3-8b-math-sft-full-5-epoch-4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [llama3-8b-math-sft-full-5-epoch-4.Q4_0.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q4_0.gguf) | Q4_0 | 4.34GB | | [llama3-8b-math-sft-full-5-epoch-4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [llama3-8b-math-sft-full-5-epoch-4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [llama3-8b-math-sft-full-5-epoch-4.Q4_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q4_K.gguf) | Q4_K | 4.58GB | | [llama3-8b-math-sft-full-5-epoch-4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | 
[llama3-8b-math-sft-full-5-epoch-4.Q4_1.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q4_1.gguf) | Q4_1 | 4.78GB | | [llama3-8b-math-sft-full-5-epoch-4.Q5_0.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q5_0.gguf) | Q5_0 | 5.21GB | | [llama3-8b-math-sft-full-5-epoch-4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [llama3-8b-math-sft-full-5-epoch-4.Q5_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q5_K.gguf) | Q5_K | 5.34GB | | [llama3-8b-math-sft-full-5-epoch-4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [llama3-8b-math-sft-full-5-epoch-4.Q5_1.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q5_1.gguf) | Q5_1 | 5.65GB | | [llama3-8b-math-sft-full-5-epoch-4.Q6_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q6_K.gguf) | Q6_K | 6.14GB | | [llama3-8b-math-sft-full-5-epoch-4.Q8_0.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf/blob/main/llama3-8b-math-sft-full-5-epoch-4.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers license: llama3 base_model: Dynosaur/llama3-8b-math-sft tags: - alignment-handbook - trl - sft - generated_from_trainer - trl - sft - generated_from_trainer datasets: - Dynosaur/math-sft-full-5 model-index: - name: llama3-8b-math-sft-full-5-epoch-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-8b-math-sft-full-5-epoch-4 This model is a fine-tuned version of [Dynosaur/llama3-8b-math-sft](https://huggingface.co/Dynosaur/llama3-8b-math-sft) on the Dynosaur/math-sft-full-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
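The card above lists the available quantizations but gives no loading code. Below is a minimal sketch for running one of the GGUF files locally, assuming `llama-cpp-python` and `huggingface_hub` are installed; the Q4_K_M quant, context size, and prompt are illustrative choices, not part of the original card.

```python
# Download one quant from the repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Dynosaur_-_llama3-8b-math-sft-full-5-epoch-4-gguf",
    filename="llama3-8b-math-sft-full-5-epoch-4.Q4_K_M.gguf",  # ~4.58GB per the table above
)
llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is a hypothetical choice
out = llm("What is 12 * 7? Answer with just the number.", max_tokens=16)
print(out["choices"][0]["text"])
```

Q4_K_M is a common quality/size compromise; the smaller quants in the table (Q2_K, Q3_K_*) trade accuracy for memory.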
shanchen/ds-limo-1.1-100
shanchen
"2025-05-09T23:39:59Z"
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-28T20:10:16Z"
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B library_name: transformers model_name: ds-limo-1.1-100 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for ds-limo-1.1-100 This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="shanchen/ds-limo-1.1-100", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bitterman/s1/runs/06ocbo21) This model was trained with SFT. ### Framework versions - TRL: 0.12.0 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.1.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
YWZBrandon/google_flan-t5-base_semantic_5_clusters_1_full_upsample1000
YWZBrandon
"2025-05-09T23:39:30Z"
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2025-05-09T23:39:07Z"
--- library_name: transformers license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer model-index: - name: google_flan-t5-base_semantic_5_clusters_1_full_upsample1000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # google_flan-t5-base_semantic_5_clusters_1_full_upsample1000 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 256 - total_eval_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
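The card above documents hyperparameters but no usage, so here is a hedged inference sketch; the prompt is hypothetical because the card does not describe the training task or expected input format.

```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="YWZBrandon/google_flan-t5-base_semantic_5_clusters_1_full_upsample1000",
)
# Hypothetical prompt; the card does not say what inputs the model expects.
print(generator("Summarize: the model was fine-tuned for one epoch at lr 3e-4.")[0]["generated_text"])
```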
ma921/gpt2-large_dpo_imdb_noise40_epoch10
ma921
"2025-05-09T23:35:04Z"
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:ma921/gpt2-large-sft-imdb", "base_model:finetune:ma921/gpt2-large-sft-imdb", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-09T23:34:07Z"
--- library_name: transformers license: mit base_model: ma921/gpt2-large-sft-imdb tags: - generated_from_trainer model-index: - name: gpt2-large_dpo_imdb_noise40_epoch10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-large_dpo_imdb_noise40_epoch10 This model is a fine-tuned version of [ma921/gpt2-large-sft-imdb](https://huggingface.co/ma921/gpt2-large-sft-imdb) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 32 - total_train_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
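The repo name suggests a GPT-2-large checkpoint SFT'd on IMDB and then DPO-tuned with label noise; a minimal generation sketch under that assumption follows, with a movie-review-style prompt that is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ma921/gpt2-large_dpo_imdb_noise40_epoch10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

inputs = tokenizer("This movie was", return_tensors="pt")
with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
    )
print(tokenizer.decode(out[0], skip_special_tokens=True))
```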
Naga1289/RECE_HenriMatisse
Naga1289
"2025-05-09T23:34:53Z"
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2025-05-09T23:33:20Z"
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
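The tags mark this repo as a `StableDiffusionPipeline` checkpoint, and "RECE" plus the artist name suggest a concept-erasure edit of Stable Diffusion, though the card itself is an unfilled template. A loading sketch under that assumption; the prompt and filename are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Naga1289/RECE_HenriMatisse",
    torch_dtype=torch.float16,
).to("cuda")  # keep the default float32 and drop .to("cuda") on CPU
image = pipe("a seaside village, oil painting").images[0]
image.save("sample.png")
```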
RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf
RichardErkhov
"2025-05-09T23:34:02Z"
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-09T19:55:22Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama3-8b-math-sft-subtask-0-new - GGUF - Model creator: https://huggingface.co/Dynosaur/ - Original model: https://huggingface.co/Dynosaur/llama3-8b-math-sft-subtask-0-new/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama3-8b-math-sft-subtask-0-new.Q2_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q2_K.gguf) | Q2_K | 2.96GB | | [llama3-8b-math-sft-subtask-0-new.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [llama3-8b-math-sft-subtask-0-new.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.IQ3_S.gguf) | IQ3_S | 3.43GB | | [llama3-8b-math-sft-subtask-0-new.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [llama3-8b-math-sft-subtask-0-new.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.IQ3_M.gguf) | IQ3_M | 3.52GB | | [llama3-8b-math-sft-subtask-0-new.Q3_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q3_K.gguf) | Q3_K | 3.74GB | | [llama3-8b-math-sft-subtask-0-new.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [llama3-8b-math-sft-subtask-0-new.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [llama3-8b-math-sft-subtask-0-new.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [llama3-8b-math-sft-subtask-0-new.Q4_0.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q4_0.gguf) | Q4_0 | 4.34GB | | [llama3-8b-math-sft-subtask-0-new.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [llama3-8b-math-sft-subtask-0-new.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [llama3-8b-math-sft-subtask-0-new.Q4_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q4_K.gguf) | Q4_K | 4.58GB | | [llama3-8b-math-sft-subtask-0-new.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [llama3-8b-math-sft-subtask-0-new.Q4_1.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q4_1.gguf) | Q4_1 | 4.78GB | | 
[llama3-8b-math-sft-subtask-0-new.Q5_0.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q5_0.gguf) | Q5_0 | 5.21GB | | [llama3-8b-math-sft-subtask-0-new.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [llama3-8b-math-sft-subtask-0-new.Q5_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q5_K.gguf) | Q5_K | 5.34GB | | [llama3-8b-math-sft-subtask-0-new.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [llama3-8b-math-sft-subtask-0-new.Q5_1.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q5_1.gguf) | Q5_1 | 5.65GB | | [llama3-8b-math-sft-subtask-0-new.Q6_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q6_K.gguf) | Q6_K | 6.14GB | | [llama3-8b-math-sft-subtask-0-new.Q8_0.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-0-new-gguf/blob/main/llama3-8b-math-sft-subtask-0-new.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers license: llama3 base_model: Dynosaur/llama3-8b-math-sft tags: - alignment-handbook - trl - sft - generated_from_trainer - trl - sft - generated_from_trainer datasets: - Dynosaur/math-sft-subtask-0 model-index: - name: llama3-8b-math-sft-subtask-0-new results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-8b-math-sft-subtask-0-new This model is a fine-tuned version of [Dynosaur/llama3-8b-math-sft](https://huggingface.co/Dynosaur/llama3-8b-math-sft) on the Dynosaur/math-sft-subtask-0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
Grogros/phi-2-safecoderCode-OurSafecoder
Grogros
"2025-05-09T23:32:44Z"
0
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "generated_from_trainer", "conversational", "base_model:microsoft/phi-2", "base_model:finetune:microsoft/phi-2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-09T20:32:41Z"
--- library_name: transformers license: mit base_model: microsoft/phi-2 tags: - generated_from_trainer model-index: - name: phi-2-safecoderCode-OurSafecoder results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-2-safecoderCode-OurSafecoder This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Use adafactor and the args are: No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 2000 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.2.0a0+81ea7a4 - Datasets 3.5.0 - Tokenizers 0.21.1
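The name hints at SafeCoder-style security fine-tuning of phi-2, but the card gives no usage instructions. A generation sketch assuming a standard causal-LM interface; the code prompt is hypothetical since no prompt format is documented.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Grogros/phi-2-safecoderCode-OurSafecoder")
# Illustrative code-completion prompt; the card does not document a prompt format.
print(generator("def read_user_file(path):", max_new_tokens=64)[0]["generated_text"])
```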
cantillation/Teamim-IvritAI-large-v3-turbo-new_WeightDecay-0.005_Augmented_date-07-05-2025
cantillation
"2025-05-09T23:30:59Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "he", "base_model:ivrit-ai/whisper-large-v3-turbo", "base_model:finetune:ivrit-ai/whisper-large-v3-turbo", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2025-05-07T16:18:34Z"
--- library_name: transformers language: - he license: apache-2.0 base_model: ivrit-ai/whisper-large-v3-turbo tags: - hf-asr-leaderboard - generated_from_trainer metrics: - wer model-index: - name: he-cantillation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # he-cantillation This model is a fine-tuned version of [ivrit-ai/whisper-large-v3-turbo](https://huggingface.co/ivrit-ai/whisper-large-v3-turbo) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.0244 - Wer: 97.8059 - Avg Precision Exact: 0.0463 - Avg Recall Exact: 0.1014 - Avg F1 Exact: 0.0598 - Avg Precision Letter Shift: 0.0622 - Avg Recall Letter Shift: 0.1383 - Avg F1 Letter Shift: 0.0805 - Avg Precision Word Level: 0.0777 - Avg Recall Word Level: 0.1656 - Avg F1 Word Level: 0.0970 - Avg Precision Word Shift: 0.1542 - Avg Recall Word Shift: 0.3497 - Avg F1 Word Shift: 0.1988 - Precision Median Exact: 0.0227 - Recall Median Exact: 0.0625 - F1 Median Exact: 0.0357 - Precision Max Exact: 1.0 - Recall Max Exact: 1.0 - F1 Max Exact: 1.0 - Precision Min Exact: 0.0 - Recall Min Exact: 0.0 - F1 Min Exact: 0.0 - Precision Min Letter Shift: 0.0 - Recall Min Letter Shift: 0.0 - F1 Min Letter Shift: 0.0 - Precision Min Word Level: 0.0 - Recall Min Word Level: 0.0 - F1 Min Word Level: 0.0 - Precision Min Word Shift: 0.0 - Recall Min Word Shift: 0.0 - F1 Min Word Shift: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 2 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 60000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Avg Precision Exact | Avg Recall Exact | Avg F1 Exact | Avg Precision Letter Shift | Avg Recall Letter Shift | Avg F1 Letter Shift | Avg Precision Word Level | Avg Recall Word Level | Avg F1 Word Level | Avg Precision Word Shift | Avg Recall Word Shift | Avg F1 Word Shift | Precision Median Exact | Recall Median Exact | F1 Median Exact | Precision Max Exact | Recall Max Exact | F1 Max Exact | Precision Min Exact | Recall Min Exact | F1 Min Exact | Precision Min Letter Shift | Recall Min Letter Shift | F1 Min Letter Shift | Precision Min Word Level | Recall Min Word Level | F1 Min Word Level | Precision Min Word Shift | Recall Min Word Shift | F1 Min Word Shift | 
|:-------------:|:------:|:-----:|:---------------:|:--------:|:-------------------:|:----------------:|:------------:|:--------------------------:|:-----------------------:|:-------------------:|:------------------------:|:---------------------:|:-----------------:|:------------------------:|:---------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:--------------------------:|:-----------------------:|:-------------------:|:------------------------:|:---------------------:|:-----------------:|:------------------------:|:---------------------:|:-----------------:| | No log | 0.0002 | 1 | 7.1581 | 109.4023 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0.0568 | 0.3754 | 2500 | 2.0970 | 96.9729 | 0.0613 | 0.0775 | 0.0671 | 0.0842 | 0.1078 | 0.0923 | 0.1037 | 0.1324 | 0.1135 | 0.2206 | 0.2995 | 0.2485 | 0.0345 | 0.05 | 0.04 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0314 | 0.7508 | 5000 | 2.3109 | 96.4492 | 0.0709 | 0.0953 | 0.0792 | 0.0932 | 0.1273 | 0.1045 | 0.1089 | 0.1502 | 0.1225 | 0.2270 | 0.3279 | 0.2605 | 0.0357 | 0.0588 | 0.0435 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.018 | 1.1261 | 7500 | 2.5849 | 97.4587 | 0.0458 | 0.0776 | 0.0548 | 0.0637 | 0.1096 | 0.0766 | 0.0775 | 0.1319 | 0.0926 | 0.1673 | 0.2959 | 0.2038 | 0.025 | 0.0455 | 0.0331 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0075 | 1.5015 | 10000 | 3.1851 | 97.9489 | 0.0397 | 0.0824 | 0.0507 | 0.0539 | 0.1151 | 0.0693 | 0.0664 | 0.1414 | 0.0847 | 0.1476 | 0.3210 | 0.1893 | 0.0204 | 0.05 | 0.0317 | 1.0 | 1.0 | 0.8571 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0153 | 1.8769 | 12500 | 2.7215 | 96.5921 | 0.0645 | 0.0950 | 0.0740 | 0.0868 | 0.1307 | 0.1001 | 0.1080 | 0.1624 | 0.1228 | 0.2293 | 0.3653 | 0.2688 | 0.0323 | 0.0625 | 0.0417 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0108 | 2.2523 | 15000 | 3.2672 | 97.2092 | 0.0544 | 0.1084 | 0.0685 | 0.0710 | 0.1448 | 0.0901 | 0.0848 | 0.1708 | 0.1064 | 0.1695 | 0.3629 | 0.2190 | 0.0270 | 0.0714 | 0.0392 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0077 | 2.6276 | 17500 | 3.2673 | 97.1625 | 0.0551 | 0.1014 | 0.0684 | 0.0749 | 0.1379 | 0.0922 | 0.0895 | 0.1631 | 0.1091 | 0.1845 | 0.3500 | 0.2292 | 0.0278 | 0.0625 | 0.0392 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0179 | 3.0030 | 20000 | 3.2682 | 97.2967 | 0.0516 | 0.0918 | 0.0634 | 0.0712 | 0.1274 | 0.0875 | 0.0869 | 0.1538 | 0.1061 | 0.1856 | 0.3430 | 0.2305 | 0.0278 | 0.0625 | 0.0385 | 1.0 | 0.8125 | 0.6667 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0066 | 3.3784 | 22500 | 3.5714 | 97.3726 | 0.0454 | 0.0955 | 0.0584 | 0.0621 | 0.1316 | 0.0796 | 0.0742 | 0.1559 | 0.0944 | 0.1542 | 0.3349 | 0.1987 | 0.0227 | 0.0556 | 0.0345 | 0.8571 | 1.0 | 0.8571 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0042 | 3.7538 | 25000 | 3.8114 | 97.9270 | 0.0404 | 0.0894 | 0.0525 | 0.0558 | 0.1247 | 0.0726 | 0.0677 | 0.1516 | 0.0883 | 0.1420 | 0.3347 | 0.1890 | 0.0222 | 0.0588 | 
0.0345 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0014 | 4.1291 | 27500 | 4.0215 | 96.8212 | 0.0506 | 0.1030 | 0.0654 | 0.0667 | 0.1376 | 0.0863 | 0.0801 | 0.1627 | 0.1023 | 0.1651 | 0.3500 | 0.2142 | 0.0263 | 0.0667 | 0.0385 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0021 | 4.5045 | 30000 | 4.0509 | 98.0262 | 0.0395 | 0.0905 | 0.0523 | 0.0545 | 0.1243 | 0.0716 | 0.0666 | 0.1525 | 0.0873 | 0.1443 | 0.3387 | 0.1904 | 0.0217 | 0.0625 | 0.0345 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0028 | 4.8799 | 32500 | 3.9264 | 98.0685 | 0.0381 | 0.0844 | 0.0492 | 0.0513 | 0.1175 | 0.0667 | 0.0632 | 0.1427 | 0.0812 | 0.1319 | 0.3158 | 0.1729 | 0.0 | 0.0 | 0.0 | 0.7778 | 1.0 | 0.8750 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0027 | 5.2553 | 35000 | 4.5942 | 97.9532 | 0.0446 | 0.0991 | 0.0585 | 0.0593 | 0.1323 | 0.0770 | 0.0731 | 0.1586 | 0.0924 | 0.1465 | 0.3347 | 0.1893 | 0.0227 | 0.0667 | 0.0348 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0019 | 5.6306 | 37500 | 4.1899 | 97.8249 | 0.0437 | 0.1073 | 0.0581 | 0.0569 | 0.1405 | 0.0753 | 0.0679 | 0.1646 | 0.0884 | 0.1375 | 0.3415 | 0.1811 | 0.0204 | 0.0588 | 0.0331 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0011 | 6.0060 | 40000 | 4.5501 | 98.1939 | 0.0396 | 0.0917 | 0.0528 | 0.0555 | 0.1270 | 0.0728 | 0.0689 | 0.1550 | 0.0890 | 0.1460 | 0.3409 | 0.1913 | 0.0196 | 0.0588 | 0.0317 | 0.7778 | 1.0 | 0.8235 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0008 | 6.3814 | 42500 | 4.3012 | 97.9882 | 0.0421 | 0.0935 | 0.0552 | 0.0577 | 0.1284 | 0.0753 | 0.0710 | 0.1548 | 0.0909 | 0.1450 | 0.3337 | 0.1898 | 0.0217 | 0.0588 | 0.0339 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0007 | 6.7568 | 45000 | 4.2077 | 97.8409 | 0.0456 | 0.0924 | 0.0583 | 0.0623 | 0.1262 | 0.0792 | 0.0756 | 0.1512 | 0.0949 | 0.1578 | 0.3287 | 0.2014 | 0.0244 | 0.0625 | 0.0357 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0007 | 7.1321 | 47500 | 4.5387 | 97.6308 | 0.0510 | 0.1027 | 0.0652 | 0.0679 | 0.1377 | 0.0863 | 0.0815 | 0.1632 | 0.1026 | 0.1655 | 0.3474 | 0.2120 | 0.0263 | 0.0667 | 0.0385 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.001 | 7.5075 | 50000 | 4.3632 | 98.0670 | 0.0457 | 0.0830 | 0.0565 | 0.0635 | 0.1169 | 0.0785 | 0.0792 | 0.1419 | 0.0955 | 0.1735 | 0.3260 | 0.2133 | 0.0253 | 0.0556 | 0.0357 | 1.0 | 1.0 | 0.8 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0004 | 7.8829 | 52500 | 4.4452 | 97.6060 | 0.0445 | 0.0961 | 0.0580 | 0.0599 | 0.1288 | 0.0772 | 0.0723 | 0.1532 | 0.0921 | 0.1513 | 0.3325 | 0.1962 | 0.0233 | 0.0625 | 0.0351 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0001 | 8.2583 | 55000 | 4.5731 | 97.7607 | 0.0475 | 0.1018 | 0.0614 | 0.0635 | 0.1380 | 0.0821 | 0.0787 | 0.1659 | 0.0992 | 0.1603 | 0.3549 | 0.2071 | 0.025 | 0.0667 | 0.0377 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0001 | 8.6336 | 57500 | 4.9017 | 97.6148 | 0.0474 | 0.1006 | 0.0609 | 0.0625 | 0.1370 | 0.0811 | 0.0777 | 0.1637 | 0.0978 | 0.1550 | 0.3454 | 0.2009 | 0.025 | 0.0667 | 0.0377 | 1.0 | 1.0 
| 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0 | 9.0090 | 60000 | 5.0244 | 97.8059 | 0.0463 | 0.1014 | 0.0598 | 0.0622 | 0.1383 | 0.0805 | 0.0777 | 0.1656 | 0.0970 | 0.1542 | 0.3497 | 0.1988 | 0.0227 | 0.0625 | 0.0357 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.7.0+cu126 - Datasets 2.12.0 - Tokenizers 0.20.1
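For a Whisper fine-tune like the one above, a standard ASR pipeline call should work; the audio path below is a placeholder, and note that the high WER reported in the table suggests this checkpoint is not yet reliable for transcription.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="cantillation/Teamim-IvritAI-large-v3-turbo-new_WeightDecay-0.005_Augmented_date-07-05-2025",
)
# "sample.wav" is a placeholder; any mono 16 kHz audio file works.
result = asr("sample.wav", generate_kwargs={"language": "hebrew"})
print(result["text"])
```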
marialvsantiago/7b4c9cc1-44a1-4885-ac29-b49cca6ba9e8
marialvsantiago
"2025-05-09T23:30:48Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/Yarn-Solar-10b-64k", "base_model:adapter:NousResearch/Yarn-Solar-10b-64k", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
"2025-05-09T23:06:51Z"
--- library_name: peft license: apache-2.0 base_model: NousResearch/Yarn-Solar-10b-64k tags: - axolotl - generated_from_trainer model-index: - name: 7b4c9cc1-44a1-4885-ac29-b49cca6ba9e8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Yarn-Solar-10b-64k bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ec6dd1afb3752e77_train_data.json ds_type: json format: custom path: /workspace/input_data/ec6dd1afb3752e77_train_data.json type: field_instruction: prompt field_output: GEITje-7B-ultra format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: marialvsantiago/7b4c9cc1-44a1-4885-ac29-b49cca6ba9e8 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 350 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/ec6dd1afb3752e77_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f237c084-1801-4b21-a011-8e376c125cce wandb_project: s56-33 wandb_run: your_name wandb_runid: f237c084-1801-4b21-a011-8e376c125cce warmup_steps: 15 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 7b4c9cc1-44a1-4885-ac29-b49cca6ba9e8 This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-64k](https://huggingface.co/NousResearch/Yarn-Solar-10b-64k) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3210 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 15 - training_steps: 350 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.1534 | 0.0592 | 350 | 1.3210 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
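Since this repo stores a LoRA adapter rather than full weights, loading goes through PEFT. A sketch assuming the adapter config points back at the Yarn-Solar base, which ships custom modeling code and therefore needs `trust_remote_code`; whether your environment accepts the 4-bit settings from the axolotl config is a separate concern.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "marialvsantiago/7b4c9cc1-44a1-4885-ac29-b49cca6ba9e8"
# Reads adapter_config.json and downloads the Yarn-Solar base automatically.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Solar-10b-64k", trust_remote_code=True)
```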
augustocsc/Se124M100KInfPrompt_NT_EOS
augustocsc
"2025-05-09T23:29:57Z"
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "license:mit", "region:us" ]
null
"2025-05-09T20:56:33Z"
--- library_name: peft license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: Se124M100KInfPrompt_NT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Se124M100KInfPrompt_NT This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3899 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.9983 | 0.0082 | 20 | 2.6302 | | 2.9256 | 0.0164 | 40 | 2.6331 | | 2.9534 | 0.0246 | 60 | 2.6305 | | 2.9277 | 0.0327 | 80 | 2.6052 | | 2.8694 | 0.0409 | 100 | 2.5836 | | 2.879 | 0.0491 | 120 | 2.5278 | | 2.7972 | 0.0573 | 140 | 2.4722 | | 2.7112 | 0.0655 | 160 | 2.4048 | | 2.5739 | 0.0737 | 180 | 2.3244 | | 2.4522 | 0.0819 | 200 | 2.2167 | | 2.3121 | 0.0901 | 220 | 2.0842 | | 2.1652 | 0.0982 | 240 | 1.9278 | | 2.0135 | 0.1064 | 260 | 1.7658 | | 1.8352 | 0.1146 | 280 | 1.5877 | | 1.6331 | 0.1228 | 300 | 1.3988 | | 1.4721 | 0.1310 | 320 | 1.2257 | | 1.3347 | 0.1392 | 340 | 1.0901 | | 1.202 | 0.1474 | 360 | 0.9639 | | 1.125 | 0.1555 | 380 | 0.8691 | | 1.002 | 0.1637 | 400 | 0.8003 | | 0.9698 | 0.1719 | 420 | 0.7525 | | 0.8963 | 0.1801 | 440 | 0.7148 | | 0.8571 | 0.1883 | 460 | 0.6803 | | 0.7983 | 0.1965 | 480 | 0.6542 | | 0.7838 | 0.2047 | 500 | 0.6332 | | 0.7689 | 0.2129 | 520 | 0.6118 | | 0.7256 | 0.2210 | 540 | 0.5931 | | 0.7146 | 0.2292 | 560 | 0.5799 | | 0.686 | 0.2374 | 580 | 0.5673 | | 0.6729 | 0.2456 | 600 | 0.5565 | | 0.6628 | 0.2538 | 620 | 0.5445 | | 0.6525 | 0.2620 | 640 | 0.5406 | | 0.6298 | 0.2702 | 660 | 0.5328 | | 0.6345 | 0.2783 | 680 | 0.5237 | | 0.6171 | 0.2865 | 700 | 0.5169 | | 0.6052 | 0.2947 | 720 | 0.5113 | | 0.5862 | 0.3029 | 740 | 0.5066 | | 0.5767 | 0.3111 | 760 | 0.5021 | | 0.5777 | 0.3193 | 780 | 0.4966 | | 0.5689 | 0.3275 | 800 | 0.4939 | | 0.5677 | 0.3357 | 820 | 0.4894 | | 0.5567 | 0.3438 | 840 | 0.4878 | | 0.5547 | 0.3520 | 860 | 0.4817 | | 0.5516 | 0.3602 | 880 | 0.4808 | | 0.5577 | 0.3684 | 900 | 0.4787 | | 0.5461 | 0.3766 | 920 | 0.4740 | | 0.5449 | 0.3848 | 940 | 0.4712 | | 0.5301 | 0.3930 | 960 | 0.4711 | | 0.5313 | 0.4011 | 980 | 0.4682 | | 0.5278 | 0.4093 | 1000 | 0.4676 | | 0.518 | 0.4175 | 1020 | 0.4643 | | 0.531 | 0.4257 | 1040 | 0.4621 | | 0.5302 | 0.4339 | 1060 | 0.4624 | | 0.5238 | 0.4421 | 1080 | 0.4581 | | 0.5179 | 0.4503 | 1100 | 0.4572 | | 0.5167 | 0.4585 | 1120 | 0.4577 | | 0.5181 | 0.4666 | 1140 | 0.4534 | | 0.5207 | 0.4748 | 1160 | 0.4536 | | 0.5037 | 0.4830 | 1180 | 0.4533 | | 0.5117 | 0.4912 | 1200 | 0.4517 | | 0.5066 | 0.4994 | 1220 | 0.4500 | | 0.5023 | 0.5076 | 1240 | 0.4487 | | 0.4903 | 0.5158 | 1260 | 0.4470 | | 0.4916 | 0.5239 | 1280 | 0.4462 | | 0.4908 | 0.5321 | 1300 | 0.4460 
| | 0.4956 | 0.5403 | 1320 | 0.4443 | | 0.5059 | 0.5485 | 1340 | 0.4438 | | 0.4908 | 0.5567 | 1360 | 0.4427 | | 0.4978 | 0.5649 | 1380 | 0.4416 | | 0.4861 | 0.5731 | 1400 | 0.4410 | | 0.4865 | 0.5813 | 1420 | 0.4404 | | 0.4916 | 0.5894 | 1440 | 0.4381 | | 0.4832 | 0.5976 | 1460 | 0.4352 | | 0.4811 | 0.6058 | 1480 | 0.4381 | | 0.4779 | 0.6140 | 1500 | 0.4364 | | 0.4792 | 0.6222 | 1520 | 0.4381 | | 0.4755 | 0.6304 | 1540 | 0.4346 | | 0.4797 | 0.6386 | 1560 | 0.4358 | | 0.4769 | 0.6467 | 1580 | 0.4321 | | 0.4682 | 0.6549 | 1600 | 0.4323 | | 0.4797 | 0.6631 | 1620 | 0.4338 | | 0.4754 | 0.6713 | 1640 | 0.4332 | | 0.4687 | 0.6795 | 1660 | 0.4325 | | 0.4629 | 0.6877 | 1680 | 0.4330 | | 0.478 | 0.6959 | 1700 | 0.4312 | | 0.4693 | 0.7041 | 1720 | 0.4291 | | 0.4746 | 0.7122 | 1740 | 0.4305 | | 0.4626 | 0.7204 | 1760 | 0.4300 | | 0.4641 | 0.7286 | 1780 | 0.4317 | | 0.4606 | 0.7368 | 1800 | 0.4287 | | 0.4678 | 0.7450 | 1820 | 0.4278 | | 0.4736 | 0.7532 | 1840 | 0.4267 | | 0.4739 | 0.7614 | 1860 | 0.4270 | | 0.4627 | 0.7695 | 1880 | 0.4269 | | 0.4596 | 0.7777 | 1900 | 0.4247 | | 0.4617 | 0.7859 | 1920 | 0.4245 | | 0.4663 | 0.7941 | 1940 | 0.4238 | | 0.4569 | 0.8023 | 1960 | 0.4243 | | 0.4683 | 0.8105 | 1980 | 0.4229 | | 0.4664 | 0.8187 | 2000 | 0.4231 | | 0.4711 | 0.8269 | 2020 | 0.4203 | | 0.4712 | 0.8350 | 2040 | 0.4201 | | 0.4579 | 0.8432 | 2060 | 0.4186 | | 0.4688 | 0.8514 | 2080 | 0.4221 | | 0.4566 | 0.8596 | 2100 | 0.4222 | | 0.4573 | 0.8678 | 2120 | 0.4179 | | 0.4606 | 0.8760 | 2140 | 0.4183 | | 0.456 | 0.8842 | 2160 | 0.4189 | | 0.4684 | 0.8923 | 2180 | 0.4180 | | 0.4522 | 0.9005 | 2200 | 0.4183 | | 0.4591 | 0.9087 | 2220 | 0.4171 | | 0.457 | 0.9169 | 2240 | 0.4194 | | 0.4714 | 0.9251 | 2260 | 0.4160 | | 0.4637 | 0.9333 | 2280 | 0.4173 | | 0.4454 | 0.9415 | 2300 | 0.4190 | | 0.4579 | 0.9497 | 2320 | 0.4133 | | 0.4567 | 0.9578 | 2340 | 0.4153 | | 0.4479 | 0.9660 | 2360 | 0.4152 | | 0.4523 | 0.9742 | 2380 | 0.4138 | | 0.4559 | 0.9824 | 2400 | 0.4147 | | 0.4493 | 0.9906 | 2420 | 0.4131 | | 0.4568 | 0.9988 | 2440 | 0.4145 | | 0.4494 | 1.0070 | 2460 | 0.4120 | | 0.4549 | 1.0151 | 2480 | 0.4120 | | 0.4491 | 1.0233 | 2500 | 0.4130 | | 0.454 | 1.0315 | 2520 | 0.4143 | | 0.4474 | 1.0397 | 2540 | 0.4134 | | 0.4541 | 1.0479 | 2560 | 0.4134 | | 0.4458 | 1.0561 | 2580 | 0.4117 | | 0.4469 | 1.0643 | 2600 | 0.4108 | | 0.4502 | 1.0725 | 2620 | 0.4120 | | 0.4447 | 1.0806 | 2640 | 0.4102 | | 0.445 | 1.0888 | 2660 | 0.4107 | | 0.4496 | 1.0970 | 2680 | 0.4080 | | 0.445 | 1.1052 | 2700 | 0.4097 | | 0.4549 | 1.1134 | 2720 | 0.4071 | | 0.4476 | 1.1216 | 2740 | 0.4095 | | 0.4427 | 1.1298 | 2760 | 0.4111 | | 0.4412 | 1.1379 | 2780 | 0.4091 | | 0.441 | 1.1461 | 2800 | 0.4111 | | 0.4465 | 1.1543 | 2820 | 0.4080 | | 0.4427 | 1.1625 | 2840 | 0.4076 | | 0.4417 | 1.1707 | 2860 | 0.4080 | | 0.4409 | 1.1789 | 2880 | 0.4080 | | 0.4573 | 1.1871 | 2900 | 0.4078 | | 0.443 | 1.1953 | 2920 | 0.4067 | | 0.4412 | 1.2034 | 2940 | 0.4079 | | 0.4384 | 1.2116 | 2960 | 0.4079 | | 0.4426 | 1.2198 | 2980 | 0.4083 | | 0.4407 | 1.2280 | 3000 | 0.4056 | | 0.4487 | 1.2362 | 3020 | 0.4059 | | 0.4421 | 1.2444 | 3040 | 0.4064 | | 0.4412 | 1.2526 | 3060 | 0.4057 | | 0.4354 | 1.2607 | 3080 | 0.4073 | | 0.4454 | 1.2689 | 3100 | 0.4056 | | 0.4376 | 1.2771 | 3120 | 0.4064 | | 0.4469 | 1.2853 | 3140 | 0.4043 | | 0.4437 | 1.2935 | 3160 | 0.4038 | | 0.4412 | 1.3017 | 3180 | 0.4031 | | 0.4354 | 1.3099 | 3200 | 0.4053 | | 0.4413 | 1.3181 | 3220 | 0.4050 | | 0.4344 | 1.3262 | 3240 | 0.4048 | | 0.4471 | 1.3344 | 3260 | 0.4022 | | 0.4347 | 1.3426 | 3280 | 
0.4049 | | 0.4367 | 1.3508 | 3300 | 0.4019 | | 0.4391 | 1.3590 | 3320 | 0.4033 | | 0.4424 | 1.3672 | 3340 | 0.4019 | | 0.4391 | 1.3754 | 3360 | 0.4009 | | 0.4377 | 1.3835 | 3380 | 0.4014 | | 0.4413 | 1.3917 | 3400 | 0.4015 | | 0.4382 | 1.3999 | 3420 | 0.4006 | | 0.4298 | 1.4081 | 3440 | 0.4015 | | 0.4503 | 1.4163 | 3460 | 0.4019 | | 0.4413 | 1.4245 | 3480 | 0.4015 | | 0.4343 | 1.4327 | 3500 | 0.3996 | | 0.4373 | 1.4409 | 3520 | 0.4002 | | 0.4338 | 1.4490 | 3540 | 0.4016 | | 0.4292 | 1.4572 | 3560 | 0.4000 | | 0.4444 | 1.4654 | 3580 | 0.4004 | | 0.4342 | 1.4736 | 3600 | 0.3996 | | 0.4339 | 1.4818 | 3620 | 0.4004 | | 0.4291 | 1.4900 | 3640 | 0.4006 | | 0.435 | 1.4982 | 3660 | 0.3993 | | 0.445 | 1.5063 | 3680 | 0.3999 | | 0.4389 | 1.5145 | 3700 | 0.4009 | | 0.4316 | 1.5227 | 3720 | 0.3988 | | 0.4363 | 1.5309 | 3740 | 0.3994 | | 0.4384 | 1.5391 | 3760 | 0.3995 | | 0.4355 | 1.5473 | 3780 | 0.4006 | | 0.436 | 1.5555 | 3800 | 0.3983 | | 0.4384 | 1.5637 | 3820 | 0.3981 | | 0.4394 | 1.5718 | 3840 | 0.3985 | | 0.4392 | 1.5800 | 3860 | 0.3978 | | 0.4456 | 1.5882 | 3880 | 0.3991 | | 0.4359 | 1.5964 | 3900 | 0.3984 | | 0.4328 | 1.6046 | 3920 | 0.4004 | | 0.4272 | 1.6128 | 3940 | 0.3992 | | 0.4352 | 1.6210 | 3960 | 0.3993 | | 0.4262 | 1.6291 | 3980 | 0.3994 | | 0.4406 | 1.6373 | 4000 | 0.3979 | | 0.4291 | 1.6455 | 4020 | 0.3991 | | 0.4262 | 1.6537 | 4040 | 0.3975 | | 0.4337 | 1.6619 | 4060 | 0.3978 | | 0.4404 | 1.6701 | 4080 | 0.3964 | | 0.4408 | 1.6783 | 4100 | 0.3983 | | 0.4378 | 1.6865 | 4120 | 0.3977 | | 0.4322 | 1.6946 | 4140 | 0.3973 | | 0.4343 | 1.7028 | 4160 | 0.3970 | | 0.43 | 1.7110 | 4180 | 0.3961 | | 0.4343 | 1.7192 | 4200 | 0.3958 | | 0.4308 | 1.7274 | 4220 | 0.3965 | | 0.4355 | 1.7356 | 4240 | 0.3952 | | 0.4371 | 1.7438 | 4260 | 0.3966 | | 0.4342 | 1.7519 | 4280 | 0.3956 | | 0.4364 | 1.7601 | 4300 | 0.3962 | | 0.434 | 1.7683 | 4320 | 0.3953 | | 0.4335 | 1.7765 | 4340 | 0.3965 | | 0.4317 | 1.7847 | 4360 | 0.3953 | | 0.4298 | 1.7929 | 4380 | 0.3954 | | 0.4307 | 1.8011 | 4400 | 0.3942 | | 0.4345 | 1.8093 | 4420 | 0.3952 | | 0.433 | 1.8174 | 4440 | 0.3943 | | 0.4261 | 1.8256 | 4460 | 0.3955 | | 0.4338 | 1.8338 | 4480 | 0.3950 | | 0.4263 | 1.8420 | 4500 | 0.3944 | | 0.4263 | 1.8502 | 4520 | 0.3939 | | 0.436 | 1.8584 | 4540 | 0.3943 | | 0.432 | 1.8666 | 4560 | 0.3946 | | 0.4302 | 1.8747 | 4580 | 0.3942 | | 0.4333 | 1.8829 | 4600 | 0.3936 | | 0.4316 | 1.8911 | 4620 | 0.3936 | | 0.4294 | 1.8993 | 4640 | 0.3938 | | 0.4265 | 1.9075 | 4660 | 0.3936 | | 0.4294 | 1.9157 | 4680 | 0.3943 | | 0.4319 | 1.9239 | 4700 | 0.3942 | | 0.4391 | 1.9321 | 4720 | 0.3933 | | 0.4243 | 1.9402 | 4740 | 0.3944 | | 0.4325 | 1.9484 | 4760 | 0.3930 | | 0.4343 | 1.9566 | 4780 | 0.3924 | | 0.4287 | 1.9648 | 4800 | 0.3938 | | 0.4322 | 1.9730 | 4820 | 0.3933 | | 0.4283 | 1.9812 | 4840 | 0.3926 | | 0.4309 | 1.9894 | 4860 | 0.3935 | | 0.4238 | 1.9975 | 4880 | 0.3922 | | 0.4217 | 2.0057 | 4900 | 0.3925 | | 0.425 | 2.0139 | 4920 | 0.3926 | | 0.4389 | 2.0221 | 4940 | 0.3925 | | 0.4346 | 2.0303 | 4960 | 0.3920 | | 0.4254 | 2.0385 | 4980 | 0.3931 | | 0.4223 | 2.0467 | 5000 | 0.3919 | | 0.4268 | 2.0549 | 5020 | 0.3930 | | 0.4228 | 2.0630 | 5040 | 0.3929 | | 0.4325 | 2.0712 | 5060 | 0.3928 | | 0.4255 | 2.0794 | 5080 | 0.3928 | | 0.4305 | 2.0876 | 5100 | 0.3922 | | 0.4333 | 2.0958 | 5120 | 0.3919 | | 0.4332 | 2.1040 | 5140 | 0.3927 | | 0.4261 | 2.1122 | 5160 | 0.3929 | | 0.429 | 2.1203 | 5180 | 0.3916 | | 0.4274 | 2.1285 | 5200 | 0.3921 | | 0.4277 | 2.1367 | 5220 | 0.3928 | | 0.4356 | 2.1449 | 5240 | 0.3913 | | 0.4268 | 2.1531 | 5260 | 
0.3921 | | 0.4297 | 2.1613 | 5280 | 0.3921 | | 0.4272 | 2.1695 | 5300 | 0.3915 | | 0.4337 | 2.1777 | 5320 | 0.3922 | | 0.4312 | 2.1858 | 5340 | 0.3911 | | 0.426 | 2.1940 | 5360 | 0.3917 | | 0.4305 | 2.2022 | 5380 | 0.3925 | | 0.4373 | 2.2104 | 5400 | 0.3919 | | 0.4319 | 2.2186 | 5420 | 0.3914 | | 0.43 | 2.2268 | 5440 | 0.3921 | | 0.4307 | 2.2350 | 5460 | 0.3910 | | 0.4352 | 2.2431 | 5480 | 0.3912 | | 0.4323 | 2.2513 | 5500 | 0.3907 | | 0.4255 | 2.2595 | 5520 | 0.3905 | | 0.4286 | 2.2677 | 5540 | 0.3913 | | 0.4271 | 2.2759 | 5560 | 0.3916 | | 0.4319 | 2.2841 | 5580 | 0.3915 | | 0.4175 | 2.2923 | 5600 | 0.3911 | | 0.424 | 2.3005 | 5620 | 0.3914 | | 0.4365 | 2.3086 | 5640 | 0.3907 | | 0.4322 | 2.3168 | 5660 | 0.3906 | | 0.4227 | 2.3250 | 5680 | 0.3910 | | 0.4308 | 2.3332 | 5700 | 0.3909 | | 0.4268 | 2.3414 | 5720 | 0.3910 | | 0.4352 | 2.3496 | 5740 | 0.3911 | | 0.4274 | 2.3578 | 5760 | 0.3898 | | 0.4255 | 2.3659 | 5780 | 0.3901 | | 0.4277 | 2.3741 | 5800 | 0.3903 | | 0.4209 | 2.3823 | 5820 | 0.3905 | | 0.4221 | 2.3905 | 5840 | 0.3911 | | 0.4247 | 2.3987 | 5860 | 0.3911 | | 0.4263 | 2.4069 | 5880 | 0.3910 | | 0.4284 | 2.4151 | 5900 | 0.3912 | | 0.4251 | 2.4233 | 5920 | 0.3910 | | 0.4275 | 2.4314 | 5940 | 0.3908 | | 0.4271 | 2.4396 | 5960 | 0.3904 | | 0.4333 | 2.4478 | 5980 | 0.3904 | | 0.4237 | 2.4560 | 6000 | 0.3903 | | 0.4351 | 2.4642 | 6020 | 0.3903 | | 0.4313 | 2.4724 | 6040 | 0.3902 | | 0.4243 | 2.4806 | 6060 | 0.3910 | | 0.4289 | 2.4887 | 6080 | 0.3907 | | 0.4299 | 2.4969 | 6100 | 0.3909 | | 0.428 | 2.5051 | 6120 | 0.3903 | | 0.4202 | 2.5133 | 6140 | 0.3902 | | 0.4291 | 2.5215 | 6160 | 0.3899 | | 0.4344 | 2.5297 | 6180 | 0.3899 | | 0.4256 | 2.5379 | 6200 | 0.3902 | | 0.4227 | 2.5460 | 6220 | 0.3904 | | 0.43 | 2.5542 | 6240 | 0.3907 | | 0.4252 | 2.5624 | 6260 | 0.3900 | | 0.4224 | 2.5706 | 6280 | 0.3909 | | 0.4207 | 2.5788 | 6300 | 0.3909 | | 0.4265 | 2.5870 | 6320 | 0.3906 | | 0.4341 | 2.5952 | 6340 | 0.3907 | | 0.4228 | 2.6034 | 6360 | 0.3903 | | 0.4196 | 2.6115 | 6380 | 0.3904 | | 0.4216 | 2.6197 | 6400 | 0.3897 | | 0.4339 | 2.6279 | 6420 | 0.3904 | | 0.4255 | 2.6361 | 6440 | 0.3903 | | 0.4261 | 2.6443 | 6460 | 0.3905 | | 0.43 | 2.6525 | 6480 | 0.3906 | | 0.4265 | 2.6607 | 6500 | 0.3907 | | 0.4279 | 2.6688 | 6520 | 0.3904 | | 0.4298 | 2.6770 | 6540 | 0.3901 | | 0.4312 | 2.6852 | 6560 | 0.3901 | | 0.4199 | 2.6934 | 6580 | 0.3898 | | 0.4288 | 2.7016 | 6600 | 0.3902 | | 0.4325 | 2.7098 | 6620 | 0.3905 | | 0.4246 | 2.7180 | 6640 | 0.3903 | | 0.4281 | 2.7262 | 6660 | 0.3899 | | 0.4296 | 2.7343 | 6680 | 0.3903 | | 0.4247 | 2.7425 | 6700 | 0.3898 | | 0.4252 | 2.7507 | 6720 | 0.3905 | | 0.4255 | 2.7589 | 6740 | 0.3904 | | 0.4282 | 2.7671 | 6760 | 0.3902 | | 0.4225 | 2.7753 | 6780 | 0.3900 | | 0.4251 | 2.7835 | 6800 | 0.3900 | | 0.4201 | 2.7916 | 6820 | 0.3903 | | 0.4252 | 2.7998 | 6840 | 0.3905 | | 0.427 | 2.8080 | 6860 | 0.3907 | | 0.428 | 2.8162 | 6880 | 0.3907 | | 0.437 | 2.8244 | 6900 | 0.3900 | | 0.4257 | 2.8326 | 6920 | 0.3901 | | 0.4239 | 2.8408 | 6940 | 0.3905 | | 0.4276 | 2.8490 | 6960 | 0.3902 | | 0.4274 | 2.8571 | 6980 | 0.3897 | | 0.4327 | 2.8653 | 7000 | 0.3902 | | 0.4313 | 2.8735 | 7020 | 0.3896 | | 0.4277 | 2.8817 | 7040 | 0.3904 | | 0.4289 | 2.8899 | 7060 | 0.3904 | | 0.4321 | 2.8981 | 7080 | 0.3900 | | 0.4232 | 2.9063 | 7100 | 0.3902 | | 0.4274 | 2.9144 | 7120 | 0.3901 | | 0.4339 | 2.9226 | 7140 | 0.3901 | | 0.4226 | 2.9308 | 7160 | 0.3904 | | 0.4184 | 2.9390 | 7180 | 0.3902 | | 0.4242 | 2.9472 | 7200 | 0.3901 | | 0.4259 | 2.9554 | 7220 | 0.3902 | | 0.4297 | 2.9636 | 7240 | 
0.3897 | | 0.4268 | 2.9718 | 7260 | 0.3900 | | 0.4281 | 2.9799 | 7280 | 0.3900 | | 0.4234 | 2.9881 | 7300 | 0.3901 | | 0.4196 | 2.9963 | 7320 | 0.3900 | ### Framework versions - PEFT 0.15.1 - Transformers 4.51.3 - Pytorch 2.6.0+cu118 - Datasets 3.5.0 - Tokenizers 0.21.1
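This repo is also a PEFT adapter, here on GPT-2; for deployment you can merge the LoRA weights into the base model so inference no longer needs the adapter machinery. A sketch, assuming the adapter was saved in standard PEFT format; the output directory name is hypothetical.

```python
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("augustocsc/Se124M100KInfPrompt_NT_EOS")
merged = model.merge_and_unload()  # fold the LoRA deltas into the GPT-2 weights
merged.save_pretrained("se124m-merged")  # hypothetical output directory
```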
RedHatAI/granite-3.1-8b-instruct
RedHatAI
"2025-05-09T23:29:42Z"
0
0
transformers
[ "transformers", "safetensors", "granite", "text-generation", "language", "granite-3.1", "conversational", "arxiv:0000.00000", "base_model:ibm-granite/granite-3.1-8b-base", "base_model:finetune:ibm-granite/granite-3.1-8b-base", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
"2025-05-09T23:27:26Z"
--- pipeline_tag: text-generation inference: false license: apache-2.0 library_name: transformers tags: - language - granite-3.1 base_model: - ibm-granite/granite-3.1-8b-base new_version: ibm-granite/granite-3.3-8b-instruct --- # Granite-3.1-8B-Instruct **Model Summary:** Granite-3.1-8B-Instruct is an 8B parameter long-context instruct model finetuned from Granite-3.1-8B-Base using a combination of open source instruction datasets with permissive licenses and internally collected synthetic datasets tailored for solving long context problems. This model is developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging. - **Developers:** Granite Team, IBM - **GitHub Repository:** [ibm-granite/granite-3.1-language-models](https://github.com/ibm-granite/granite-3.1-language-models) - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/) - **Paper:** [Granite 3.1 Language Models (coming soon)](https://huggingface.co/collections/ibm-granite/granite-31-language-models-6751dbbf2f3389bec5c6f02d) - **Release Date**: December 18th, 2024 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) **Supported Languages:** English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.1 models for languages beyond these 12 languages. **Intended Use:** The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications. *Capabilities* * Summarization * Text classification * Text extraction * Question-answering * Retrieval Augmented Generation (RAG) * Code related tasks * Function-calling tasks * Multilingual dialog use cases * Long-context tasks including long document/meeting summarization, long document QA, etc. **Generation:** This is a simple example of how to use the Granite-3.1-8B-Instruct model. Install the following libraries: ```shell pip install torch torchvision torchaudio pip install accelerate pip install transformers ``` Then, copy the snippet from the section that is relevant for your use case. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # use "cpu" on machines without a GPU; a concrete device string keeps the .to(device) call below valid model_path = "ibm-granite/granite-3.1-8b-instruct" tokenizer = AutoTokenizer.from_pretrained(model_path) # drop device_map if running on CPU model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device) model.eval() # change input text as desired chat = [ { "role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location."
}, ] chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) # tokenize the text input_tokens = tokenizer(chat, return_tensors="pt").to(device) # generate output tokens output = model.generate(**input_tokens, max_new_tokens=100) # decode output tokens into text output = tokenizer.batch_decode(output) # print output print(output) ``` **Evaluation Results:** <table> <caption><b>HuggingFace Open LLM Leaderboard V1</b></caption> <thead> <tr> <th style="text-align:left; background-color: #001d6c; color: white;">Models</th> <th style="text-align:center; background-color: #001d6c; color: white;">ARC-Challenge</th> <th style="text-align:center; background-color: #001d6c; color: white;">Hellaswag</th> <th style="text-align:center; background-color: #001d6c; color: white;">MMLU</th> <th style="text-align:center; background-color: #001d6c; color: white;">TruthfulQA</th> <th style="text-align:center; background-color: #001d6c; color: white;">Winogrande</th> <th style="text-align:center; background-color: #001d6c; color: white;">GSM8K</th> <th style="text-align:center; background-color: #001d6c; color: white;">Avg</th> </tr></thead> <tbody> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.1-8B-Instruct</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">62.62</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">84.48</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">65.34</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">66.23</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">75.37</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">73.84</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">71.31</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-2B-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">54.61</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">75.14</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">55.31</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">59.42</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.48</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">52.76</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">60.79</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-3B-A800M-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">50.42</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">73.01</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">52.19</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">49.71</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">64.87</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">48.97</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">56.53</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-1B-A400M-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">42.66</td> <td style="text-align:center; background-color: #FFFFFF; color: 
#2D2D2D;">65.97</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">26.13</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">46.77</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">62.35</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">33.88</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">46.29</td> </tr> </tbody></table> <table> <caption><b>HuggingFace Open LLM Leaderboard V2</b></caption> <thead> <tr> <th style="text-align:left; background-color: #001d6c; color: white;">Models</th> <th style="text-align:center; background-color: #001d6c; color: white;">IFEval</th> <th style="text-align:center; background-color: #001d6c; color: white;">BBH</th> <th style="text-align:center; background-color: #001d6c; color: white;">MATH Lvl 5</th> <th style="text-align:center; background-color: #001d6c; color: white;">GPQA</th> <th style="text-align:center; background-color: #001d6c; color: white;">MUSR</th> <th style="text-align:center; background-color: #001d6c; color: white;">MMLU-Pro</th> <th style="text-align:center; background-color: #001d6c; color: white;">Avg</th> </tr></thead> <tbody> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.1-8B-Instruct</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">72.08</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">34.09</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">21.68</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">8.28</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">19.01</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">28.19</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">30.55</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-2B-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">62.86</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">21.82</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">11.33</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">5.26</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">4.87</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">20.21</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">21.06</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-3B-A800M-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">55.16</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">16.69</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">10.35</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">5.15</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">2.51</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">12.75</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">17.1</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-1B-A400M-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">46.86</td> <td 
style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">6.18</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">4.08</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">0</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">0.78</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">2.41</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">10.05</td> </tr> </tbody></table> **Model Architecture:** Granite-3.1-8B-Instruct is based on a decoder-only dense transformer architecture. Core components of this architecture are: GQA and RoPE, MLP with SwiGLU, RMSNorm, and shared input/output embeddings. <table> <thead> <tr> <th style="text-align:left; background-color: #001d6c; color: white;">Model</th> <th style="text-align:center; background-color: #001d6c; color: white;">2B Dense</th> <th style="text-align:center; background-color: #001d6c; color: white;">8B Dense</th> <th style="text-align:center; background-color: #001d6c; color: white;">1B MoE</th> <th style="text-align:center; background-color: #001d6c; color: white;">3B MoE</th> </tr></thead> <tbody> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Embedding size</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">2048</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">4096</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">1024</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">1536</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Number of layers</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">40</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">40</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">24</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">32</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Attention head size</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">64</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">128</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">64</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">64</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Number of attention heads</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">32</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">32</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">16</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">24</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Number of KV heads</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">8</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">8</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">8</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">8</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">MLP hidden size</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">8192</td> <td style="text-align:center; 
background-color: #DAE8FF; color: black;">12800</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">512</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">512</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">MLP activation</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">SwiGLU</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">SwiGLU</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">SwiGLU</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">SwiGLU</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Number of experts</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">—</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">—</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">32</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">40</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">MoE TopK</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">—</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">—</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">8</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">8</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Initialization std</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">0.1</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">0.1</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">0.1</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">0.1</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Sequence length</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">128K</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">128K</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">128K</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">128K</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Position embedding</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">RoPE</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">RoPE</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">RoPE</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">RoPE</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;"># Parameters</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">2.5B</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">8.1B</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">1.3B</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">3.3B</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;"># Active parameters</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">2.5B</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">8.1B</td> <td style="text-align:center; background-color: #FFFFFF; color: 
black;">400M</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">800M</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;"># Training tokens</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">12T</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">12T</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">10T</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">10T</td> </tr> </tbody></table> **Training Data:** Overall, our SFT data is largely comprised of three key sources: (1) publicly available datasets with permissive license, (2) internal synthetic data targeting specific capabilities including long-context tasks, and (3) very small amounts of human-curated data. A detailed attribution of datasets can be found in the [Granite 3.0 Technical Report](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/paper.pdf), [Granite 3.1 Technical Report (coming soon)](https://huggingface.co/collections/ibm-granite/granite-31-language-models-6751dbbf2f3389bec5c6f02d), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf). **Infrastructure:** We train Granite 3.1 Language Models using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs. **Ethical Considerations and Limitations:** Granite 3.1 Instruct Models are primarily finetuned using instruction-response pairs mostly in English, but also multilingual data covering eleven languages. Although this model can handle multilingual dialog use cases, its performance might not be similar to English tasks. In such case, introducing a small number of examples (few-shot) can help the model in generating more accurate outputs. While this model has been aligned by keeping safety in consideration, the model may in some cases produce inaccurate, biased, or unsafe responses to user prompts. So we urge the community to use this model with proper safety testing and tuning tailored for their specific tasks. **Resources** - ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite - 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/ - 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources <!-- ## Citation ``` @misc{granite-models, author = {author 1, author2, ...}, title = {}, journal = {}, volume = {}, year = {2024}, url = {https://arxiv.org/abs/0000.00000}, } ``` -->
litert-community/Hammer2.1-1.5b
litert-community
"2025-05-09T23:28:36Z"
0
0
null
[ "tflite", "chat", "text-generation", "base_model:MadeAgents/Hammer2.1-1.5b", "base_model:finetune:MadeAgents/Hammer2.1-1.5b", "license:cc-by-nc-4.0", "region:us" ]
text-generation
"2025-05-09T23:24:56Z"
--- license: cc-by-nc-4.0 base_model: MadeAgents/Hammer2.1-1.5b pipeline_tag: text-generation tags: - chat --- # litert-community/Hammer2.1-1.5b This model provides a few variants of [MadeAgents/Hammer2.1-1.5b](https://huggingface.co/MadeAgents/Hammer2.1-1.5b) that are ready for deployment on Android using the [LiteRT (fka TFLite) stack](https://ai.google.dev/edge/litert) and [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference). ## Use the models ### Colab *Disclaimer: The target deployment surface for the LiteRT models is Android/iOS/Web and the stack has been optimized for performance on these targets. Trying out the system in Colab is an easier way to familiarize yourself with the LiteRT stack, with the caveat that the performance (memory and latency) on Colab could be much worse than on a local device.* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https://huggingface.co/litert-community/Hammer2.1-1.5b/blob/main/notebook.ipynb) ### Android * Download and install [the apk](https://github.com/google-ai-edge/mediapipe-samples/releases/latest/download/llm_inference-debug.apk). * Follow the instructions in the app. To build the demo app from source, please follow the [instructions](https://github.com/google-ai-edge/mediapipe-samples/blob/main/examples/llm_inference/android/README.md) from the GitHub repository. ## Performance ### Android Note that all benchmark stats are from a Samsung S24 Ultra with 1280 KV cache size with multiple prefill signatures enabled. <table border="1"> <tr> <th></th> <th>Backend</th> <th>Prefill (tokens/sec)</th> <th>Decode (tokens/sec)</th> <th>Time-to-first-token (sec)</th> <th>Memory (RSS in MB)</th> <th>Model size (MB)</th> </tr> <tr> <td>dynamic_int8</td> <td>cpu</td> <td><p style="text-align: right">194.97 tk/s</p></td> <td><p style="text-align: right">23.94 tk/s</p></td> <td><p style="text-align: right">1.72 s</p></td> <td><p style="text-align: right">1,877 MB</p></td> <td><p style="text-align: right">1,542 MB</p></td> </tr> <tr> <td>dynamic_int8</td> <td>gpu</td> <td><p style="text-align: right">966.04 tk/s</p></td> <td><p style="text-align: right">24.08 tk/s</p></td> <td><p style="text-align: right">6.23 s</p></td> <td><p style="text-align: right">3,214 MB</p></td> <td><p style="text-align: right">1,542 MB</p></td> </tr> </table> * Model Size: measured by the size of the .tflite flatbuffer (serialization format for LiteRT models) * Memory: indicator of peak RAM usage * The inference on CPU is accelerated via the LiteRT [XNNPACK](https://github.com/google/XNNPACK) delegate with 4 threads * Benchmark is run with cache enabled and initialized. During the first run, the time to first token may differ. * dynamic_int8: quantized model with int8 weights and float activations.
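To fetch one of the LiteRT variants programmatically (for example, before pushing it to a device or loading it in the Colab above), a minimal sketch using `huggingface_hub` follows; the exact `.tflite` filename is an assumption and should be checked against the repository's file listing.

```python
# Sketch: download a LiteRT (.tflite) variant of Hammer2.1-1.5b from the Hub.
# The filename below is a guess at the dynamic_int8 variant; verify it against
# the actual files in the repo before use.
from huggingface_hub import hf_hub_download

model_file = hf_hub_download(
    repo_id="litert-community/Hammer2.1-1.5b",
    filename="Hammer2.1-1.5b_seq128_q8_ekv1280.tflite",  # hypothetical name
)
print("Downloaded to:", model_file)
```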
yoisceleste/yoisceleste
yoisceleste
"2025-05-09T23:24:24Z"
0
0
null
[ "license:other", "region:us" ]
null
"2025-05-09T22:17:02Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
Grogros/Llama-3.2-1B-OurInstruct-distillation-Alpaca-3.0-AlpacaPoison
Grogros
"2025-05-09T23:24:11Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:mveroe/Llama-3.2-1B-OurInstruct", "base_model:finetune:mveroe/Llama-3.2-1B-OurInstruct", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-09T20:50:04Z"
--- library_name: transformers license: llama3.2 base_model: mveroe/Llama-3.2-1B-OurInstruct tags: - generated_from_trainer model-index: - name: Llama-3.2-1B-OurInstruct-distillation-Alpaca-3.0-AlpacaPoison results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-3.2-1B-OurInstruct-distillation-Alpaca-3.0-AlpacaPoison This model is a fine-tuned version of [mveroe/Llama-3.2-1B-OurInstruct](https://huggingface.co/mveroe/Llama-3.2-1B-OurInstruct) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Use adafactor and the args are: No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 2000 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.2.0a0+81ea7a4 - Datasets 3.5.0 - Tokenizers 0.21.1
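As a starting point, here is a minimal sketch for loading this checkpoint with standard `transformers` APIs; it assumes the tokenizer ships a chat template, and the generation settings are illustrative defaults rather than values recommended by the authors.

```python
# Sketch: load the fine-tuned checkpoint as a plain causal LM (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Grogros/Llama-3.2-1B-OurInstruct-distillation-Alpaca-3.0-AlpacaPoison"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Give me three tips for writing clear documentation."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```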
cantillation/Teamim-large-v3-turbo_WeightDecay-0.005_Augmented_date-07-05-2025
cantillation
"2025-05-09T23:22:16Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "he", "base_model:openai/whisper-large-v3-turbo", "base_model:finetune:openai/whisper-large-v3-turbo", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2025-05-07T16:18:19Z"
--- library_name: transformers language: - he license: mit base_model: openai/whisper-large-v3-turbo tags: - hf-asr-leaderboard - generated_from_trainer metrics: - wer model-index: - name: he-cantillation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # he-cantillation This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.0610 - Wer: 96.8708 - Avg Precision Exact: 0.0606 - Avg Recall Exact: 0.1083 - Avg F1 Exact: 0.0750 - Avg Precision Letter Shift: 0.0800 - Avg Recall Letter Shift: 0.1442 - Avg F1 Letter Shift: 0.0985 - Avg Precision Word Level: 0.0945 - Avg Recall Word Level: 0.1679 - Avg F1 Word Level: 0.1153 - Avg Precision Word Shift: 0.1901 - Avg Recall Word Shift: 0.3552 - Avg F1 Word Shift: 0.2367 - Precision Median Exact: 0.0294 - Recall Median Exact: 0.0667 - F1 Median Exact: 0.04 - Precision Max Exact: 1.0 - Recall Max Exact: 1.0 - F1 Max Exact: 1.0 - Precision Min Exact: 0.0 - Recall Min Exact: 0.0 - F1 Min Exact: 0.0 - Precision Min Letter Shift: 0.0 - Recall Min Letter Shift: 0.0 - F1 Min Letter Shift: 0.0 - Precision Min Word Level: 0.0 - Recall Min Word Level: 0.0 - F1 Min Word Level: 0.0 - Precision Min Word Shift: 0.0 - Recall Min Word Shift: 0.0 - F1 Min Word Shift: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 2 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 60000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Avg Precision Exact | Avg Recall Exact | Avg F1 Exact | Avg Precision Letter Shift | Avg Recall Letter Shift | Avg F1 Letter Shift | Avg Precision Word Level | Avg Recall Word Level | Avg F1 Word Level | Avg Precision Word Shift | Avg Recall Word Shift | Avg F1 Word Shift | Precision Median Exact | Recall Median Exact | F1 Median Exact | Precision Max Exact | Recall Max Exact | F1 Max Exact | Precision Min Exact | Recall Min Exact | F1 Min Exact | Precision Min Letter Shift | Recall Min Letter Shift | F1 Min Letter Shift | Precision Min Word Level | Recall Min Word Level | F1 Min Word Level | Precision Min Word Shift | Recall Min Word Shift | F1 Min Word Shift | 
|:-------------:|:------:|:-----:|:---------------:|:--------:|:-------------------:|:----------------:|:------------:|:--------------------------:|:-----------------------:|:-------------------:|:------------------------:|:---------------------:|:-----------------:|:------------------------:|:---------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:--------------------------:|:-----------------------:|:-------------------:|:------------------------:|:---------------------:|:-----------------:|:------------------------:|:---------------------:|:-----------------:| | No log | 0.0002 | 1 | 6.2380 | 110.8685 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0.0645 | 0.3754 | 2500 | 2.4574 | 95.0107 | 0.0821 | 0.0867 | 0.0838 | 0.1114 | 0.1192 | 0.1142 | 0.1349 | 0.1431 | 0.1376 | 0.2944 | 0.3265 | 0.3067 | 0.0449 | 0.0526 | 0.0476 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0342 | 0.7508 | 5000 | 2.7722 | 95.1289 | 0.0845 | 0.0930 | 0.0876 | 0.1130 | 0.1269 | 0.1180 | 0.1355 | 0.1519 | 0.1411 | 0.2829 | 0.3349 | 0.3018 | 0.04 | 0.0533 | 0.0460 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0227 | 1.1261 | 7500 | 3.0372 | 96.0378 | 0.0695 | 0.0938 | 0.0777 | 0.0945 | 0.1296 | 0.1060 | 0.1145 | 0.1560 | 0.1277 | 0.2386 | 0.3426 | 0.2719 | 0.0333 | 0.0556 | 0.04 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0134 | 1.5015 | 10000 | 3.3407 | 97.1990 | 0.0528 | 0.0844 | 0.0625 | 0.0705 | 0.1139 | 0.0835 | 0.0852 | 0.1368 | 0.1005 | 0.1785 | 0.2988 | 0.2142 | 0.0270 | 0.0526 | 0.0357 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0164 | 1.8769 | 12500 | 3.4023 | 97.1348 | 0.0502 | 0.0782 | 0.0587 | 0.0708 | 0.1122 | 0.0829 | 0.0883 | 0.1383 | 0.1023 | 0.1950 | 0.3205 | 0.2307 | 0.0278 | 0.05 | 0.0357 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0099 | 2.2523 | 15000 | 3.5449 | 96.5002 | 0.0636 | 0.1103 | 0.0774 | 0.0818 | 0.1446 | 0.1002 | 0.0959 | 0.1688 | 0.1172 | 0.1927 | 0.3540 | 0.2395 | 0.0303 | 0.0667 | 0.0408 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0087 | 2.6276 | 17500 | 3.6473 | 96.9131 | 0.0629 | 0.1068 | 0.0760 | 0.0830 | 0.1421 | 0.1000 | 0.0969 | 0.1655 | 0.1164 | 0.1956 | 0.3517 | 0.2395 | 0.0303 | 0.0625 | 0.04 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0192 | 3.0030 | 20000 | 4.0472 | 97.3799 | 0.0532 | 0.0979 | 0.0664 | 0.0724 | 0.1334 | 0.0898 | 0.0878 | 0.1609 | 0.1084 | 0.1819 | 0.3434 | 0.2263 | 0.0286 | 0.0625 | 0.0392 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0075 | 3.3784 | 22500 | 4.1235 | 97.3449 | 0.0485 | 0.0950 | 0.0614 | 0.0671 | 0.1345 | 0.0852 | 0.0809 | 0.1608 | 0.1024 | 0.1666 | 0.3439 | 0.2129 | 0.025 | 0.0625 | 0.0364 | 1.0 | 1.0 | 0.9091 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0049 | 3.7538 | 25000 | 4.0555 | 97.9795 | 0.0442 | 0.0939 | 0.0568 | 0.0616 | 0.1317 | 0.0787 | 0.0754 | 0.1577 | 0.0946 | 0.1565 | 0.3412 | 0.2004 | 0.0227 | 0.0588 | 0.0351 | 1.0 
| 1.0 | 0.8571 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0022 | 4.1291 | 27500 | 4.0404 | 97.6337 | 0.0503 | 0.0884 | 0.0615 | 0.0700 | 0.1253 | 0.0861 | 0.0847 | 0.1503 | 0.1037 | 0.1824 | 0.3361 | 0.2259 | 0.0238 | 0.0556 | 0.0345 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0016 | 4.5045 | 30000 | 4.4676 | 97.8424 | 0.0495 | 0.0885 | 0.0608 | 0.0685 | 0.1239 | 0.0839 | 0.0807 | 0.1467 | 0.0991 | 0.1739 | 0.3305 | 0.2164 | 0.025 | 0.0588 | 0.0364 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0031 | 4.8799 | 32500 | 3.8982 | 97.0867 | 0.0638 | 0.1024 | 0.0758 | 0.0841 | 0.1362 | 0.0998 | 0.0999 | 0.1615 | 0.1181 | 0.2033 | 0.3429 | 0.2441 | 0.0294 | 0.0588 | 0.0392 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0037 | 5.2553 | 35000 | 5.1029 | 97.8978 | 0.0483 | 0.0917 | 0.0605 | 0.0670 | 0.1264 | 0.0828 | 0.0815 | 0.1520 | 0.0997 | 0.1718 | 0.3367 | 0.2144 | 0.0256 | 0.0625 | 0.0370 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0018 | 5.6306 | 37500 | 4.3351 | 97.4514 | 0.0476 | 0.0950 | 0.0607 | 0.0646 | 0.1297 | 0.0820 | 0.0775 | 0.1547 | 0.0979 | 0.1635 | 0.3385 | 0.2094 | 0.025 | 0.0625 | 0.0370 | 1.0 | 1.0 | 0.8571 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0011 | 6.0060 | 40000 | 4.4936 | 97.1275 | 0.0539 | 0.1114 | 0.0696 | 0.0727 | 0.1496 | 0.0929 | 0.0866 | 0.1756 | 0.1098 | 0.1715 | 0.3579 | 0.2189 | 0.0267 | 0.0714 | 0.0385 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0005 | 6.3814 | 42500 | 4.5198 | 96.9962 | 0.0602 | 0.1078 | 0.0743 | 0.0779 | 0.1423 | 0.0969 | 0.0929 | 0.1661 | 0.1138 | 0.1893 | 0.3525 | 0.2340 | 0.0294 | 0.0667 | 0.0408 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.001 | 6.7568 | 45000 | 4.8643 | 97.2647 | 0.0509 | 0.0968 | 0.0640 | 0.0686 | 0.1322 | 0.0862 | 0.0823 | 0.1559 | 0.1023 | 0.1718 | 0.3380 | 0.2151 | 0.0256 | 0.0625 | 0.0364 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0003 | 7.1321 | 47500 | 4.7786 | 97.2238 | 0.0546 | 0.1022 | 0.0680 | 0.0735 | 0.1403 | 0.0918 | 0.0869 | 0.1647 | 0.1081 | 0.1774 | 0.3502 | 0.2227 | 0.0263 | 0.0625 | 0.0377 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0013 | 7.5075 | 50000 | 4.5187 | 96.5775 | 0.0600 | 0.1019 | 0.0726 | 0.0785 | 0.1357 | 0.0954 | 0.0934 | 0.1601 | 0.1125 | 0.1956 | 0.3501 | 0.2394 | 0.0286 | 0.0625 | 0.0392 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0005 | 7.8829 | 52500 | 4.7046 | 97.0517 | 0.0573 | 0.0999 | 0.0703 | 0.0759 | 0.1339 | 0.0930 | 0.0906 | 0.1595 | 0.1109 | 0.1874 | 0.3438 | 0.2321 | 0.0286 | 0.0625 | 0.0385 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0001 | 8.2583 | 55000 | 5.3282 | 97.2296 | 0.0536 | 0.1044 | 0.0676 | 0.0716 | 0.1416 | 0.0901 | 0.0847 | 0.1667 | 0.1063 | 0.1682 | 0.3478 | 0.2142 | 0.025 | 0.0667 | 0.0370 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0 | 8.6336 | 57500 | 5.3362 | 96.9743 | 0.0589 | 0.1081 | 0.0734 | 0.0757 | 0.1421 | 0.0950 | 0.0887 | 0.1642 | 0.1103 | 0.1800 | 0.3481 | 0.2266 | 0.0286 | 0.0667 | 0.0408 | 1.0 | 1.0 | 1.0 | 0.0 | 
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0001 | 9.0090 | 60000 | 5.0610 | 96.8708 | 0.0606 | 0.1083 | 0.0750 | 0.0800 | 0.1442 | 0.0985 | 0.0945 | 0.1679 | 0.1153 | 0.1901 | 0.3552 | 0.2367 | 0.0294 | 0.0667 | 0.04 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.7.0+cu126 - Datasets 2.12.0 - Tokenizers 0.20.1
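For quick transcription tests, a minimal sketch using the standard `transformers` ASR pipeline is shown below; the audio file name is a placeholder, not a file shipped with this repository.

```python
# Sketch: transcribe Hebrew audio with the fine-tuned Whisper checkpoint via
# the standard transformers ASR pipeline. The audio path is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="cantillation/Teamim-large-v3-turbo_WeightDecay-0.005_Augmented_date-07-05-2025",
)
result = asr("example_cantillation.wav")  # hypothetical input file
print(result["text"])
```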
texeira/casi
texeira
"2025-05-09T23:21:49Z"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-05-09T22:56:24Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: casi --- # Casi <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `casi` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "casi", "lora_weights": "https://huggingface.co/texeira/casi/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('texeira/casi', weight_name='lora.safetensors') image = pipeline('casi').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1200 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/texeira/casi/discussions) to add images that show off what you’ve made with this LoRA.
RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf
RichardErkhov
"2025-05-09T23:19:17Z"
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-09T19:55:14Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama3-8b-math-sft-subtask-4 - GGUF - Model creator: https://huggingface.co/Dynosaur/ - Original model: https://huggingface.co/Dynosaur/llama3-8b-math-sft-subtask-4/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama3-8b-math-sft-subtask-4.Q2_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q2_K.gguf) | Q2_K | 2.96GB | | [llama3-8b-math-sft-subtask-4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [llama3-8b-math-sft-subtask-4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.IQ3_S.gguf) | IQ3_S | 3.43GB | | [llama3-8b-math-sft-subtask-4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [llama3-8b-math-sft-subtask-4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.IQ3_M.gguf) | IQ3_M | 3.52GB | | [llama3-8b-math-sft-subtask-4.Q3_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q3_K.gguf) | Q3_K | 3.74GB | | [llama3-8b-math-sft-subtask-4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [llama3-8b-math-sft-subtask-4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [llama3-8b-math-sft-subtask-4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [llama3-8b-math-sft-subtask-4.Q4_0.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q4_0.gguf) | Q4_0 | 4.34GB | | [llama3-8b-math-sft-subtask-4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [llama3-8b-math-sft-subtask-4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [llama3-8b-math-sft-subtask-4.Q4_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q4_K.gguf) | Q4_K | 4.58GB | | [llama3-8b-math-sft-subtask-4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [llama3-8b-math-sft-subtask-4.Q4_1.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q4_1.gguf) | Q4_1 | 4.78GB | | [llama3-8b-math-sft-subtask-4.Q5_0.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q5_0.gguf) | Q5_0 | 5.21GB | | 
[llama3-8b-math-sft-subtask-4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [llama3-8b-math-sft-subtask-4.Q5_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q5_K.gguf) | Q5_K | 5.34GB | | [llama3-8b-math-sft-subtask-4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [llama3-8b-math-sft-subtask-4.Q5_1.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q5_1.gguf) | Q5_1 | 5.65GB | | [llama3-8b-math-sft-subtask-4.Q6_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q6_K.gguf) | Q6_K | 6.14GB | | [llama3-8b-math-sft-subtask-4.Q8_0.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-4-gguf/blob/main/llama3-8b-math-sft-subtask-4.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers license: llama3 base_model: Dynosaur/llama3-8b-math-sft tags: - alignment-handbook - trl - sft - generated_from_trainer - trl - sft - generated_from_trainer datasets: - Dynosaur/math-sft-subtask-4 model-index: - name: llama3-8b-math-sft-subtask-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-8b-math-sft-subtask-4 This model is a fine-tuned version of [Dynosaur/llama3-8b-math-sft](https://huggingface.co/Dynosaur/llama3-8b-math-sft) on the Dynosaur/math-sft-subtask-4 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
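For local inference from Python, one option is `llama-cpp-python` (an assumption; this listing does not prescribe a runtime). A minimal sketch:

```python
# Sketch: load a downloaded GGUF quant with llama-cpp-python.
# Requires `pip install llama-cpp-python`; the local path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="llama3-8b-math-sft-subtask-4.Q4_K_M.gguf", n_ctx=4096)
out = llm("Solve step by step: what is 17 * 24?", max_tokens=128)
print(out["choices"][0]["text"])
```

Smaller quants (Q2_K through Q4_K_M) trade accuracy for memory; the Q4_K_M file above is one common middle ground.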
ivnle/qwen2.5-1.5b-instruct_codex-line-50_lora_r32-a128_sft_merged
ivnle
"2025-05-09T23:18:40Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-09T23:16:45Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
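While the sections above remain to be filled in, a minimal sketch for loading this merged checkpoint with standard `transformers` APIs would look like the following; all generation settings are illustrative assumptions.

```python
# Sketch: load the merged LoRA-SFT checkpoint as a plain causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ivnle/qwen2.5-1.5b-instruct_codex-line-50_lora_r32-a128_sft_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Complete the next line: def fibonacci(n):"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```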
unsloth/Llama-3.3-70B-Instruct
unsloth
"2025-05-09T23:13:19Z"
12,565
41
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-3", "meta", "facebook", "unsloth", "pytorch", "conversational", "en", "arxiv:2204.05149", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:finetune:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-12-06T21:26:55Z"
--- base_model: meta-llama/Llama-3.3-70B-Instruct language: - en library_name: transformers license: llama3.3 tags: - llama-3 - llama - meta - facebook - unsloth - transformers - pytorch --- ## ***See [our collection](https://huggingface.co/collections/unsloth/llama-33-all-versions-67535d7d994794b9d7cf5e9f) for all versions of Llama 3.3 including GGUF, 4-bit and original 16-bit formats.*** # Finetune Llama 3.3, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # unsloth/Llama-3.3-70B-Instruct For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less | | **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less | | **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less | | **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less | | **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_3.5_Mini-Conversational.ipynb) | 2x faster | 50% less | | **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less | [<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai) - This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. 
This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. ## Special Thanks A huge thank you to the Meta and Llama team for creating and releasing these models. ## Model Information The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction tuned generative model in 70B (text in/text out). The Llama 3.3 instruction tuned text only model is optimized for multilingual dialogue use cases and outperforms many of the available open source and closed chat models on common industry benchmarks. **Model developer**: Meta **Model Architecture:** Llama 3.3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context length | GQA | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.3 (text only) | A new mix of publicly available online data. | 70B | Multilingual Text | Multilingual Text and code | 128k | Yes | 15T+ | December 2023 | **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. **Llama 3.3 model**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** * **70B Instruct: December 6, 2024** **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license, the Llama 3.3 Community License Agreement, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3\_3/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/LICENSE) **Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3.3 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.3 model also supports leveraging its outputs to improve other models, including through synthetic data generation and distillation. The Llama 3.3 Community License allows for these use cases. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.3 Community License. Use in languages beyond those explicitly referenced as supported in this model card\*\*. \*\*Note: Llama 3.3 has been trained on a broader collection of languages than the 8 supported languages.
Developers may fine-tune Llama 3.3 models for languages beyond the 8 supported languages provided they comply with the Llama 3.3 Community License and the Acceptable Use Policy, and in such cases are responsible for ensuring that any uses of Llama 3.3 in additional languages are done in a safe and responsible manner. ## How to use This repository contains two versions of Llama-3.3-70B-Instruct, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. See the snippet below for usage with Transformers: ```python import transformers import torch model_id = "meta-llama/Llama-3.3-70B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipeline( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` ### Tool use with transformers LLaMA-3.3 supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/). Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers. Here is a quick example showing a single simple tool: ```python # First, define a tool def get_current_temperature(location: str) -> float: """ Get the current temperature at a location. Args: location: The location to get the temperature for, in the format "City, Country" Returns: The current temperature at the specified location, as a float. """ return 22. # A real function should probably actually get the temperature! # Next, create a chat and apply the chat template messages = [ {"role": "system", "content": "You are a bot that responds to weather queries."}, {"role": "user", "content": "Hey, what's the temperature in Paris right now?"} ] inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True) ``` You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so: ```python tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}} messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]}) ``` and then call the tool and append the result, with the `tool` role, like so: ```python messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"}) ``` After that, you can `generate()` again to let the model use the tool result in the chat, as sketched below. Note that this was a very brief introduction to tool calling - for more information, see the [LLaMA prompt format docs](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/) and the Transformers [tool use documentation](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling).
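To make that last step concrete, here is a minimal sketch of the final `generate()` call; it assumes the `model`, `tokenizer`, `messages`, and `get_current_temperature` objects built up in the fragments above.

```python
# Sketch: after appending the tool result, re-render the chat (with the tool
# schema) and generate the model's final answer. Assumes `model`, `tokenizer`,
# `messages`, and `get_current_temperature` from the fragments above.
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the rendered prompt
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```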
### Use with `bitsandbytes` The model checkpoints can be used in `8-bit` and `4-bit` for further memory optimisations using `bitsandbytes` and `transformers`. See the snippet below for usage: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig model_id = "meta-llama/Llama-3.3-70B-Instruct" quantization_config = BitsAndBytesConfig(load_in_8bit=True) quantized_model = AutoModelForCausalLM.from_pretrained( model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config) tokenizer = AutoTokenizer.from_pretrained(model_id) input_text = "What are we having for dinner?" input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") output = quantized_model.generate(**input_ids, max_new_tokens=10) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` To load in 4-bit, simply pass `load_in_4bit=True` instead. ### Use with `llama` Please follow the instructions in the [repository](https://github.com/meta-llama/llama). To download the original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.3-70B-Instruct --include "original/*" --local-dir Llama-3.3-70B-Instruct ``` ## Hardware and Software **Training Factors** We used custom training libraries, Meta's custom-built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use** Training utilized a cumulative **39.3**M GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## ## **Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | :---: | :---: | :---: | | Llama 3.3 70B | 7.0M | 700 | 2,040 | 0 | ## The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.3 was pretrained on \~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples. **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- English Text In this section, we report the results for Llama 3.3 relative to our previous models.
### Instruction tuned models | Category | Benchmark | \# Shots | Metric | Llama 3.1 8B Instruct | Llama 3.1 70B Instruct | Llama 3.3 70B Instruct | Llama 3.1 405B Instruct | | :---- | :---- | ----- | :---- | ----- | ----- | ----- | ----- | | | MMLU (CoT) | 0 | macro\_avg/acc | 73.0 | 86.0 | 86.0 | 88.6 | | | MMLU Pro (CoT) | 5 | macro\_avg/acc | 48.3 | 66.4 | 68.9 | 73.3 | | Steerability | IFEval | | | 80.4 | 87.5 | 92.1 | 88.6 | | Reasoning | GPQA Diamond (CoT) | 0 | acc | 31.8 | 48.0 | 50.5 | 49.0 | | Code | HumanEval | 0 | pass@1 | 72.6 | 80.5 | 88.4 | 89.0 | | | MBPP EvalPlus (base) | 0 | pass@1 | 72.8 | 86.0 | 87.6 | 88.6 | | Math | MATH (CoT) | 0 | sympy\_intersection\_score | 51.9 | 68.0 | 77.0 | 73.8 | | Tool Use | BFCL v2 | 0 | overall\_ast\_summary/macro\_avg/valid | 65.4 | 77.5 | 77.3 | 81.1 | | Multilingual | MGSM | 0 | em | 68.9 | 86.9 | 91.1 | 91.6 | ## Responsibility & Safety As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks: * Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama. * Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm. * Provide protections for the community to help prevent the misuse of our models. ### Responsible deployment Llama is a foundational technology designed to be used in a variety of use cases; examples of how Meta's Llama models have been responsibly deployed can be found on our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology's power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver's seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.3 was developed following the best practices outlined in our Responsible Use Guide; you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more. #### Llama 3.3 instruct Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications, reducing the workload required to deploy safe AI systems. For more details on the safety mitigations implemented, please read the Llama 3 paper. **Fine-tuning data** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We've developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.3 systems **Large language models, including Llama 3.3, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment as well as to mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. #### New capabilities Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs, and possible integrations by developers with third-party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases. **Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third-party services they use, to be aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of third-party safeguards. **Multilinguality**: Llama 3.3 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in languages other than those that meet performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in non-supported languages without implementing fine-tuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide. ### Evaluations We evaluated Llama models for common use cases as well as specific capabilities. Common use case evaluations measure the safety risks of systems for the most commonly built applications, including chat bots, coding assistants, and tool calling. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building dedicated evaluation datasets for your use case. Prompt Guard and Code Shield are also available if relevant to the application. Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks, including long context, multilingual, tool calls, coding, and memorization. **Red teaming** For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society.
Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets. ### Critical and other risks We specifically focused our efforts on mitigating the following critical risk areas: **1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness** To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of the Llama 3.3 model could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. **2. Child Safety** Child Safety risk assessments were conducted using a team of experts to assess the model's capability to produce outputs that could result in Child Safety risks and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences. **3. Cyber attack enablement** Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta's Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3.3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3.3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of the Llama 3.3 model, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
exper1ment/email_swapnil
exper1ment
"2025-05-09T23:10:47Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-09T23:06:30Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ishanjmukherjee/gLM2-1protonly-8epoch-finetune
ishanjmukherjee
"2025-05-09T23:06:40Z"
0
0
transformers
[ "transformers", "safetensors", "gLM2", "fill-mask", "generated_from_trainer", "custom_code", "base_model:tattabio/gLM2_650M", "base_model:finetune:tattabio/gLM2_650M", "autotrain_compatible", "region:us" ]
fill-mask
"2025-05-08T20:17:52Z"
--- library_name: transformers base_model: tattabio/gLM2_650M tags: - generated_from_trainer model-index: - name: gLM2-1protonly-8epoch-finetune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gLM2-1protonly-8epoch-finetune This model is a fine-tuned version of [tattabio/gLM2_650M](https://huggingface.co/tattabio/gLM2_650M) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9922 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 8 - total_eval_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:-----:|:---------------:| | 1.231 | 0.0529 | 500 | 1.2229 | | 1.1912 | 0.1059 | 1000 | 1.1914 | | 1.1681 | 0.1588 | 1500 | 1.1674 | | 1.1636 | 0.2118 | 2000 | 1.1496 | | 1.1281 | 0.2647 | 2500 | 1.1372 | | 1.1213 | 0.3177 | 3000 | 1.1247 | | 1.1152 | 0.3706 | 3500 | 1.1150 | | 1.1094 | 0.4235 | 4000 | 1.1050 | | 1.0952 | 0.4765 | 4500 | 1.0973 | | 1.0907 | 0.5294 | 5000 | 1.0903 | | 1.0912 | 0.5824 | 5500 | 1.0847 | | 1.0886 | 0.6353 | 6000 | 1.0795 | | 1.0764 | 0.6883 | 6500 | 1.0742 | | 1.0715 | 0.7412 | 7000 | 1.0686 | | 1.0771 | 0.7942 | 7500 | 1.0637 | | 1.0806 | 0.8471 | 8000 | 1.0604 | | 1.0493 | 0.9000 | 8500 | 1.0563 | | 1.0569 | 0.9530 | 9000 | 1.0522 | | 1.0416 | 1.0059 | 9500 | 1.0504 | | 1.0382 | 1.0589 | 10000 | 1.0479 | | 1.0444 | 1.1118 | 10500 | 1.0446 | | 1.0642 | 1.1648 | 11000 | 1.0427 | | 1.025 | 1.2177 | 11500 | 1.0384 | | 1.0265 | 1.2706 | 12000 | 1.0366 | | 1.0307 | 1.3236 | 12500 | 1.0338 | | 1.0289 | 1.3765 | 13000 | 1.0309 | | 1.0071 | 1.4295 | 13500 | 1.0291 | | 1.032 | 1.4824 | 14000 | 1.0276 | | 1.0286 | 1.5354 | 14500 | 1.0241 | | 1.0266 | 1.5883 | 15000 | 1.0222 | | 1.0072 | 1.6413 | 15500 | 1.0206 | | 1.0198 | 1.6942 | 16000 | 1.0194 | | 1.0171 | 1.7471 | 16500 | 1.0172 | | 1.007 | 1.8001 | 17000 | 1.0160 | | 1.0175 | 1.8530 | 17500 | 1.0143 | | 1.0265 | 1.9060 | 18000 | 1.0125 | | 0.9966 | 1.9589 | 18500 | 1.0108 | | 0.9973 | 2.0119 | 19000 | 1.0097 | | 1.0099 | 2.0648 | 19500 | 1.0086 | | 0.9914 | 2.1177 | 20000 | 1.0074 | | 1.0189 | 2.1707 | 20500 | 1.0051 | | 1.0053 | 2.2236 | 21000 | 1.0040 | | 0.9951 | 2.2766 | 21500 | 1.0032 | | 0.99 | 2.3295 | 22000 | 1.0014 | | 0.9849 | 2.3825 | 22500 | 1.0007 | | 0.9964 | 2.4354 | 23000 | 1.0000 | | 0.9951 | 2.4884 | 23500 | 0.9986 | | 0.9822 | 2.5413 | 24000 | 0.9972 | | 0.988 | 2.5942 | 24500 | 0.9967 | | 0.993 | 2.6472 | 25000 | 0.9955 | | 0.9974 | 2.7001 | 25500 | 0.9942 | | 0.9983 | 2.7531 | 26000 | 0.9937 | | 0.9824 | 2.8060 | 26500 | 0.9938 | | 0.9758 | 2.8590 | 27000 | 0.9927 | | 0.9742 | 2.9119 | 27500 | 0.9928 | | 0.9682 | 2.9648 | 28000 | 0.9922 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.5.1 - Tokenizers 0.21.1
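The card does not include usage code. Below is a minimal loading sketch, not taken from the gLM2 documentation: it assumes the checkpoint's custom code (note the `custom_code` tag) exposes a masked-LM head through the Auto classes, and the example sequence is an illustrative guess; consult the [tattabio/gLM2_650M](https://huggingface.co/tattabio/gLM2_650M) card for the actual input conventions.

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

repo = "ishanjmukherjee/gLM2-1protonly-8epoch-finetune"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained(repo, trust_remote_code=True)

# Score a sequence with one masked position (sequence content is illustrative only)
sequence = "MSKGEELFTGV" + tokenizer.mask_token + "PILVELDGDVNGHK"
inputs = tokenizer(sequence, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```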
neshkeev/distilbert-rotten-tomatoes
neshkeev
"2025-05-09T23:06:07Z"
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-05-09T23:05:09Z"
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-rotten-tomatoes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-rotten-tomatoes This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0 - Datasets 3.6.0 - Tokenizers 0.21.1
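No usage snippet is provided; a minimal inference sketch follows, assuming the fine-tune produced a standard sentiment head (the label names are whatever this checkpoint's config defines):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline
classifier = pipeline("text-classification", model="neshkeev/distilbert-rotten-tomatoes")
print(classifier("A warm, funny and surprisingly moving film."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- label names depend on the checkpoint config
```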
ajagota71/toxicity-reward-model-max-margin-seed-100-pythia-410m-checkpoint-70
ajagota71
"2025-05-09T23:04:22Z"
0
0
null
[ "safetensors", "gpt_neox", "region:us" ]
null
"2025-05-09T23:03:18Z"
--- language: en tags: - toxicity - reward-model - irl library_name: transformers base_model: pythia-410m pipeline_tag: text-classification --- # toxicity-reward-model-max-margin-seed-100-pythia-410m-checkpoint-70 This model was trained using max_margin IRL to learn toxicity reward signals. Base model: EleutherAI/pythia-410m Original model: EleutherAI/pythia-410M Detoxified model: ajagota71/pythia-410m-detox-epoch-100
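No usage code is included in the card. Here is a minimal scoring sketch under the assumption (suggested by the `text-classification` pipeline tag) that the reward head is saved as a standard sequence-classification model; the sign convention of the score is also an assumption:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "ajagota71/toxicity-reward-model-max-margin-seed-100-pythia-410m-checkpoint-70"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Score a single text; verify the sign convention against known toxic/non-toxic pairs
inputs = tokenizer("You are a wonderful person.", return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits.squeeze()
print(reward)
```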
hoan17/saving_P1000s100x1x2KL_300
hoan17
"2025-05-09T23:03:48Z"
0
0
diffusers
[ "diffusers", "safetensors", "trl", "o2o", "reinforcement-learning", "text-to-image", "stable-diffusion", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2025-05-09T14:56:56Z"
--- license: apache-2.0 tags: - trl - o2o - diffusers - reinforcement-learning - text-to-image - stable-diffusion --- # TRL O2O Model This is a diffusion model that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for image generation conditioned on text.
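A minimal text-to-image sketch, assuming the checkpoint loads as a standard `StableDiffusionPipeline` (as the repo tags suggest); the prompt is arbitrary:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned checkpoint as a Stable Diffusion pipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_P1000s100x1x2KL_300", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of a mountain lake at sunrise").images[0]
image.save("sample.png")
```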
ConicCat/Apriel-R1P
ConicCat
"2025-05-09T23:02:53Z"
85
0
null
[ "mistral", "base_model:ServiceNow-AI/Apriel-Nemotron-15b-Thinker", "base_model:finetune:ServiceNow-AI/Apriel-Nemotron-15b-Thinker", "license:mit", "region:us" ]
null
"2025-05-08T15:01:47Z"
--- license: mit base_model: - ServiceNow-AI/Apriel-Nemotron-15b-Thinker new_version: ConicCat/Apriel-R1PV.2-NoThink --- Quick and dirty roleplay finetune of Apriel, using an improved dataset produced by scoring all replies with a reward model, then discarding scores <5/5. Tried to filter for impersonation as well, but Llama 8B was too stupid. Seems to like really low temp ~0.4 and a touch of DRY 0.8. Uses a [super funky](https://huggingface.co/ConicCat/Apriel-R1P/blob/main/R1P.json) variant of the Phi template b/c that's what the model seems to like best even though I tuned it on the Mistral template.
unsloth/Qwen3-14B-128K-GGUF
unsloth
"2025-05-09T23:00:55Z"
5,940
9
transformers
[ "transformers", "gguf", "qwen3", "text-generation", "qwen", "unsloth", "en", "arxiv:2309.00071", "base_model:Qwen/Qwen3-14B", "base_model:quantized:Qwen/Qwen3-14B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2025-04-29T11:49:24Z"
--- base_model: Qwen/Qwen3-14B language: - en library_name: transformers license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE license: apache-2.0 tags: - qwen3 - qwen - unsloth - transformers --- <div> <p style="margin-bottom: 0; margin-top: 0;"> <strong>See <a href="https://huggingface.co/collections/unsloth/qwen3-680edabfb790c8c34a242f95">our collection</a> for all versions of Qwen3 including GGUF, 4-bit & 16-bit formats.</strong> </p> <p style="margin-bottom: 0;"> <em>Learn to run Qwen3 correctly - <a href="https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune">Read our Guide</a>.</em> </p> <p style="margin-top: 0;margin-bottom: 0;"> <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em> </p> <div style="display: flex; gap: 5px; align-items: center; "> <a href="https://github.com/unslothai/unsloth/"> <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133"> </a> <a href="https://discord.gg/unsloth"> <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173"> </a> <a href="https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-r1-on-your-own-local-device"> <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143"> </a> </div> <h1 style="margin-top: 0rem;">✨ Run & Fine-tune Qwen3 with Unsloth!</h1> </div> - Fine-tune Qwen3 (14B) for free using our Google [Colab notebook here](https://docs.unsloth.ai/get-started/unsloth-notebooks)! - Read our Blog about Qwen3 support: [unsloth.ai/blog/qwen3](https://unsloth.ai/blog/qwen3) - View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks). - Run & export your fine-tuned model to Ollama, llama.cpp or HF. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Qwen3 (14B)** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 3x faster | 70% less | | **GRPO with Qwen3 (8B)** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 3x faster | 80% less | | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less | | **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less | | **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less | | **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less | # Qwen3-14B ## Qwen3 Highlights Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features: - **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios. - **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning. - **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience. - **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks. - **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**. ## Model Overview **Qwen3-14B** has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Number of Parameters: 14.8B - Number of Parameters (Non-Embedding): 13.2B - Number of Layers: 40 - Number of Attention Heads (GQA): 40 for Q and 8 for KV - Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts). For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Quickstart The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.51.0`, you will encounter the following error: ``` KeyError: 'qwen3' ``` The following contains a code snippet illustrating how to use the model to generate content based on given inputs. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen3-14B" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) # prepare the model input prompt = "Give me a short introduction to large language model." messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=32768 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() # parsing thinking content try: # rindex finding 151668 (</think>) index = len(output_ids) - output_ids[::-1].index(151668) except ValueError: index = 0 thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n") content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n") print("thinking content:", thinking_content) print("content:", content) ``` For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint: - vLLM: ```shell vllm serve Qwen/Qwen3-14B --enable-reasoning --reasoning-parser deepseek_r1 ``` - SGLang: ```shell python -m sglang.launch_server --model-path Qwen/Qwen3-14B --reasoning-parser deepseek-r1 ``` ## Switching Between Thinking and Non-Thinking Mode > [!TIP] > The `enable_thinking` switch is also available in APIs created by vLLM and SGLang. > Please refer to [our documentation](https://qwen.readthedocs.io/) for more details. ### `enable_thinking=True` By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # True is the default value for enable_thinking ) ``` In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response. > [!NOTE] > For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### `enable_thinking=False` We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # Setting enable_thinking=False disables thinking mode ) ``` In this mode, the model will not generate any think content and will not include a `<think>...</think>` block. > [!NOTE] > For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations. 
Here is an example of a multi-turn conversation: ```python from transformers import AutoModelForCausalLM, AutoTokenizer class QwenChatbot: def __init__(self, model_name="Qwen/Qwen3-14B"): self.tokenizer = AutoTokenizer.from_pretrained(model_name) self.model = AutoModelForCausalLM.from_pretrained(model_name) self.history = [] def generate_response(self, user_input): messages = self.history + [{"role": "user", "content": user_input}] text = self.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = self.tokenizer(text, return_tensors="pt") response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist() response = self.tokenizer.decode(response_ids, skip_special_tokens=True) # Update history self.history.append({"role": "user", "content": user_input}) self.history.append({"role": "assistant", "content": response}) return response # Example Usage if __name__ == "__main__": chatbot = QwenChatbot() # First input (without /think or /no_think tags, thinking mode is enabled by default) user_input_1 = "How many r's in strawberries?" print(f"User: {user_input_1}") response_1 = chatbot.generate_response(user_input_1) print(f"Bot: {response_1}") print("----------------------") # Second input with /no_think user_input_2 = "Then, how many r's in blueberries? /no_think" print(f"User: {user_input_2}") response_2 = chatbot.generate_response(user_input_2) print(f"Bot: {response_2}") print("----------------------") # Third input with /think user_input_3 = "Really? /think" print(f"User: {user_input_3}") response_3 = chatbot.generate_response(user_input_3) print(f"Bot: {response_3}") ``` > **Note** > For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled. > When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block. ## Agentic Use Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity. To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself. ```python from qwen_agent.agents import Assistant # Define LLM llm_cfg = { 'model': 'Qwen3-14B', # Use the endpoint provided by Alibaba Model Studio: # 'model_type': 'qwen_dashscope', # 'api_key': os.getenv('DASHSCOPE_API_KEY'), # Use a custom endpoint compatible with OpenAI API: 'model_server': 'http://localhost:8000/v1', # api_base 'api_key': 'EMPTY', # Other parameters: # 'generate_cfg': { # # Add: When the response content is `<think>this is the thought</think>this is the answer; # # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True, # }, } # Define Tools tools = [ {'mcpServers': { # You can specify the MCP configuration file 'time': { 'command': 'uvx', 'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai'] }, "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] } } }, 'code_interpreter', # Built-in tools ] # Define Agent bot = Assistant(llm=llm_cfg, function_list=tools) # Streaming generation messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}] for responses in bot.run(messages=messages): pass print(responses) ``` ## Processing Long Texts Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method. YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks: - Modifying the model files: In the `config.json` file, add the `rope_scaling` fields: ```json { ..., "rope_scaling": { "type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768 } } ``` For `llama.cpp`, you need to regenerate the GGUF file after the modification. - Passing command line arguments: For `vllm`, you can use ```shell vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072 ``` For `sglang`, you can use ```shell python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}' ``` For `llama-server` from `llama.cpp`, you can use ```shell llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 ``` > [!IMPORTANT] > If you encounter the following warning > ``` > Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'} > ``` > please upgrade `transformers>=4.51.0`. > [!NOTE] > All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.** > We advise adding the `rope_scaling` configuration only when processing long contexts is required. > It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0. > [!NOTE] > The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance. > [!TIP] > The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed. ## Best Practices To achieve optimal performance, we recommend the following settings: 1. **Sampling Parameters**: - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. 
**DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance. 2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance. 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`." 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed. ### Citation If you find our work helpful, feel free to give us a cite. ``` @misc{qwen3, title = {Qwen3}, url = {https://qwenlm.github.io/blog/qwen3/}, author = {Qwen Team}, month = {April}, year = {2025} } ```
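As a concrete illustration of the sampling recommendations in the Best Practices section above, here is a small sketch of a thinking-mode `generate()` call; it assumes the `model` and `model_inputs` from the Quickstart snippet, and the parameter values simply restate the recommended settings:

```python
# Thinking-mode sampling, per the Best Practices section above
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,  # adequate output length for most queries
    do_sample=True,        # do NOT use greedy decoding in thinking mode
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```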
mradermacher/TextTopic-7B-Flash-GGUF
mradermacher
"2025-05-09T22:59:31Z"
0
0
transformers
[ "transformers", "gguf", "en", "base_model:SF-Foundation/TextTopic-7B-Flash", "base_model:quantized:SF-Foundation/TextTopic-7B-Flash", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-09T22:15:28Z"
--- base_model: SF-Foundation/TextTopic-7B-Flash language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/SF-Foundation/TextTopic-7B-Flash <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TextTopic-7B-Flash-GGUF/resolve/main/TextTopic-7B-Flash.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/TextTopic-7B-Flash-GGUF/resolve/main/TextTopic-7B-Flash.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/TextTopic-7B-Flash-GGUF/resolve/main/TextTopic-7B-Flash.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TextTopic-7B-Flash-GGUF/resolve/main/TextTopic-7B-Flash.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/TextTopic-7B-Flash-GGUF/resolve/main/TextTopic-7B-Flash.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/TextTopic-7B-Flash-GGUF/resolve/main/TextTopic-7B-Flash.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TextTopic-7B-Flash-GGUF/resolve/main/TextTopic-7B-Flash.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TextTopic-7B-Flash-GGUF/resolve/main/TextTopic-7B-Flash.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/TextTopic-7B-Flash-GGUF/resolve/main/TextTopic-7B-Flash.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/TextTopic-7B-Flash-GGUF/resolve/main/TextTopic-7B-Flash.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/TextTopic-7B-Flash-GGUF/resolve/main/TextTopic-7B-Flash.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/TextTopic-7B-Flash-GGUF/resolve/main/TextTopic-7B-Flash.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
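For a quick local test, one option (not covered in the README above) is the `llama-cpp-python` bindings; this is a minimal sketch that assumes you have already downloaded one of the GGUF files from the table, using the Q4_K_M file as an example:

```python
from llama_cpp import Llama

# Point model_path at the downloaded GGUF file (Q4_K_M shown as an example)
llm = Llama(model_path="TextTopic-7B-Flash.Q4_K_M.gguf", n_ctx=4096)

out = llm("Identify the main topic of the following text: ...", max_tokens=128)
print(out["choices"][0]["text"])
```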
Grogros/Llama-3.2-1B-OurInstruct-ce-Alpaca-3.0-AlpacaRefuseSmooth
Grogros
"2025-05-09T22:58:36Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:mveroe/Llama-3.2-1B-OurInstruct", "base_model:finetune:mveroe/Llama-3.2-1B-OurInstruct", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-09T19:51:35Z"
--- library_name: transformers license: llama3.2 base_model: mveroe/Llama-3.2-1B-OurInstruct tags: - generated_from_trainer model-index: - name: Llama-3.2-1B-OurInstruct-ce-Alpaca-3.0-AlpacaRefuseSmooth results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-3.2-1B-OurInstruct-ce-Alpaca-3.0-AlpacaRefuseSmooth This model is a fine-tuned version of [mveroe/Llama-3.2-1B-OurInstruct](https://huggingface.co/mveroe/Llama-3.2-1B-OurInstruct) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adafactor (no additional optimizer arguments) - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 2000 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.2.0a0+81ea7a4 - Datasets 3.5.0 - Tokenizers 0.21.1
xxmoeedxx/wav2vec2_so
xxmoeedxx
"2025-05-09T22:52:51Z"
5
0
transformers
[ "transformers", "safetensors", "wav2vec2", "audio-classification", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
audio-classification
"2025-04-11T06:53:11Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hcstubbe/Qwen2.5-0.5B-Instruct-bnb-4bit
hcstubbe
"2025-05-09T22:45:42Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "feature-extraction", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2025-05-08T07:06:43Z"
--- base_model: unsloth/Qwen2.5-0.5B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** hcstubbe - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-0.5B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mhlongoke91/Whisper_SAE_base
mhlongoke91
"2025-05-09T22:43:35Z"
1
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2025-05-05T09:47:15Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF
mradermacher
"2025-05-09T22:38:26Z"
36
1
transformers
[ "transformers", "gguf", "en", "base_model:TouchNight/gemma-2-abliterated-Ifable-9B", "base_model:quantized:TouchNight/gemma-2-abliterated-Ifable-9B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2025-05-08T23:18:17Z"
--- base_model: TouchNight/gemma-2-abliterated-Ifable-9B language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/TouchNight/gemma-2-abliterated-Ifable-9B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.7 | very low quality | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 5.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-Q4_1.gguf) | i1-Q4_1 | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF/resolve/main/gemma-2-abliterated-Ifable-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
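As a supplement to the Usage pointer above, here is a minimal hedged sketch of pulling one of the listed quants and running it locally; it assumes `huggingface_hub` and the `llama-cpp-python` bindings are installed, and picks the Q4_K_S file the table flags as the optimal size/speed/quality trade-off.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download a single quant file from this repo (Q4_K_S per the table's recommendation)
path = hf_hub_download(
    "mradermacher/gemma-2-abliterated-Ifable-9B-i1-GGUF",
    "gemma-2-abliterated-Ifable-9B.i1-Q4_K_S.gguf",
)

# Load the GGUF file and run a short completion
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello,", max_tokens=64)["choices"][0]["text"])
```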
mradermacher/Final-Qwen-Benign-1L-GGUF
mradermacher
"2025-05-09T22:34:19Z"
0
0
transformers
[ "transformers", "gguf", "en", "base_model:psyonp/Final-Qwen-Benign-1L", "base_model:quantized:psyonp/Final-Qwen-Benign-1L", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-09T21:31:27Z"
--- base_model: psyonp/Final-Qwen-Benign-1L language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/psyonp/Final-Qwen-Benign-1L <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Final-Qwen-Benign-1L-GGUF/resolve/main/Final-Qwen-Benign-1L.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Final-Qwen-Benign-1L-GGUF/resolve/main/Final-Qwen-Benign-1L.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Final-Qwen-Benign-1L-GGUF/resolve/main/Final-Qwen-Benign-1L.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Final-Qwen-Benign-1L-GGUF/resolve/main/Final-Qwen-Benign-1L.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Final-Qwen-Benign-1L-GGUF/resolve/main/Final-Qwen-Benign-1L.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Final-Qwen-Benign-1L-GGUF/resolve/main/Final-Qwen-Benign-1L.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Final-Qwen-Benign-1L-GGUF/resolve/main/Final-Qwen-Benign-1L.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Final-Qwen-Benign-1L-GGUF/resolve/main/Final-Qwen-Benign-1L.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Final-Qwen-Benign-1L-GGUF/resolve/main/Final-Qwen-Benign-1L.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Final-Qwen-Benign-1L-GGUF/resolve/main/Final-Qwen-Benign-1L.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Final-Qwen-Benign-1L-GGUF/resolve/main/Final-Qwen-Benign-1L.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Final-Qwen-Benign-1L-GGUF/resolve/main/Final-Qwen-Benign-1L.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
jonahdvt/whisper-fleurs-large-hi_in
jonahdvt
"2025-05-09T22:30:25Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "hi", "dataset:google/fleurs", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2025-05-09T21:32:17Z"
--- library_name: transformers language: - hi license: apache-2.0 base_model: openai/whisper-large-v3 tags: - generated_from_trainer datasets: - google/fleurs model-index: - name: "Whisper Large FLEURS \u2013 hi" results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large FLEURS – hi This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the FLEURS dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 662 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.48.3 - Pytorch 2.6.0+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
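Since the usage sections above are still placeholders, here is a minimal hedged sketch of transcribing Hindi audio with this checkpoint via the `transformers` ASR pipeline; the audio path is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint from this repository
asr = pipeline(
    "automatic-speech-recognition",
    model="jonahdvt/whisper-fleurs-large-hi_in",
)

# Transcribe a local Hindi audio file (hypothetical path)
result = asr("sample_hi.wav")
print(result["text"])
```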
async0x42/Qwen3-32B-exl3_4.0bpw
async0x42
"2025-05-09T22:27:41Z"
0
0
null
[ "safetensors", "qwen3", "unsloth", "arxiv:2309.00071", "base_model:Qwen/Qwen3-32B", "base_model:quantized:Qwen/Qwen3-32B", "4-bit", "exl3", "region:us" ]
null
"2025-05-09T22:21:36Z"
--- tags: - unsloth base_model: - Qwen/Qwen3-32B --- # Qwen3-32B ## Qwen3 Highlights Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features: - **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios. - **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning. - **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience. - **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks. - **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**. ## Model Overview **Qwen3-32B** has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Number of Parameters: 32.8B - Number of Parameters (Non-Embedding): 31.2B - Number of Layers: 64 - Number of Attention Heads (GQA): 64 for Q and 8 for KV - Context Length: 32,768 tokens natively and [131,072 tokens with YaRN](#processing-long-texts). For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Quickstart The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.51.0`, you will encounter the following error: ``` KeyError: 'qwen3' ``` The following code snippet illustrates how to use the model to generate content from given inputs. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen3-32B" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) # prepare the model input prompt = "Give me a short introduction to large language models." messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # Switches between thinking and non-thinking modes. Default is True. 
) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=32768 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() # parsing thinking content try: # rindex finding 151668 (</think>) index = len(output_ids) - output_ids[::-1].index(151668) except ValueError: index = 0 thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n") content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n") print("thinking content:", thinking_content) print("content:", content) ``` For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint: - vLLM: ```shell vllm serve Qwen/Qwen3-32B --enable-reasoning --reasoning-parser deepseek_r1 ``` - SGLang: ```shell python -m sglang.launch_server --model-path Qwen/Qwen3-32B --reasoning-parser deepseek-r1 ``` ## Switching Between Thinking and Non-Thinking Mode > [!TIP] > The `enable_thinking` switch is also available in APIs created by vLLM and SGLang. > Please refer to [our documentation](https://qwen.readthedocs.io/) for more details. ### `enable_thinking=True` By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # True is the default value for enable_thinking ) ``` In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response. > [!NOTE] > For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### `enable_thinking=False` We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # Setting enable_thinking=False disables thinking mode ) ``` In this mode, the model will not generate any think content and will not include a `<think>...</think>` block. > [!NOTE] > For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations. 
Here is an example of a multi-turn conversation: ```python from transformers import AutoModelForCausalLM, AutoTokenizer class QwenChatbot: def __init__(self, model_name="Qwen/Qwen3-32B"): self.tokenizer = AutoTokenizer.from_pretrained(model_name) self.model = AutoModelForCausalLM.from_pretrained(model_name) self.history = [] def generate_response(self, user_input): messages = self.history + [{"role": "user", "content": user_input}] text = self.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = self.tokenizer(text, return_tensors="pt") response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist() response = self.tokenizer.decode(response_ids, skip_special_tokens=True) # Update history self.history.append({"role": "user", "content": user_input}) self.history.append({"role": "assistant", "content": response}) return response # Example Usage if __name__ == "__main__": chatbot = QwenChatbot() # First input (without /think or /no_think tags, thinking mode is enabled by default) user_input_1 = "How many r's in strawberries?" print(f"User: {user_input_1}") response_1 = chatbot.generate_response(user_input_1) print(f"Bot: {response_1}") print("----------------------") # Second input with /no_think user_input_2 = "Then, how many r's in blueberries? /no_think" print(f"User: {user_input_2}") response_2 = chatbot.generate_response(user_input_2) print(f"Bot: {response_2}") print("----------------------") # Third input with /think user_input_3 = "Really? /think" print(f"User: {user_input_3}") response_3 = chatbot.generate_response(user_input_3) print(f"Bot: {response_3}") ``` > **Note** > For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled. > When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block. ## Agentic Use Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity. To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself. ```python from qwen_agent.agents import Assistant # Define LLM llm_cfg = { 'model': 'Qwen3-32B', # Use the endpoint provided by Alibaba Model Studio: # 'model_type': 'qwen_dashscope', # 'api_key': os.getenv('DASHSCOPE_API_KEY'), # Use a custom endpoint compatible with OpenAI API: 'model_server': 'http://localhost:8000/v1', # api_base 'api_key': 'EMPTY', # Other parameters: # 'generate_cfg': { # # Add: When the response content is `<think>this is the thought</think>this is the answer; # # Do not add: When the response has been separated by reasoning_content and content. 
# 'thought_in_content': True, # }, } # Define Tools tools = [ {'mcpServers': { # You can specify the MCP configuration file 'time': { 'command': 'uvx', 'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai'] }, "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] } } }, 'code_interpreter', # Built-in tools ] # Define Agent bot = Assistant(llm=llm_cfg, function_list=tools) # Streaming generation messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}] for responses in bot.run(messages=messages): pass print(responses) ``` ## Processing Long Texts Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method. YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks: - Modifying the model files: In the `config.json` file, add the `rope_scaling` fields: ```json { ..., "rope_scaling": { "type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768 } } ``` For `llama.cpp`, you need to regenerate the GGUF file after the modification. - Passing command line arguments: For `vllm`, you can use ```shell vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072 ``` For `sglang`, you can use ```shell python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}' ``` For `llama-server` from `llama.cpp`, you can use ```shell llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 ``` > [!IMPORTANT] > If you encounter the following warning > ``` > Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'} > ``` > please upgrade `transformers>=4.51.0`. > [!NOTE] > All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.** > We advise adding the `rope_scaling` configuration only when processing long contexts is required. > It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0. > [!NOTE] > The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance. > [!TIP] > The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed. ## Best Practices To achieve optimal performance, we recommend the following settings: 1. **Sampling Parameters**: - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. 
**DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance. 2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance. 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`." 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed. ### Citation If you find our work helpful, feel free to give us a cite. ``` @misc{qwen3, title = {Qwen3}, url = {https://qwenlm.github.io/blog/qwen3/}, author = {Qwen Team}, month = {April}, year = {2025} } ```
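As one hedged illustration of the thinking-mode sampling recommendations above, the sketch below sends them to the OpenAI-compatible endpoint started earlier with `vllm serve`; passing `top_k`/`min_p` through vLLM's `extra_body` passthrough is an assumption about your server version.

```python
from openai import OpenAI

# Point the client at the local vLLM server started earlier
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-32B",
    messages=[{"role": "user", "content": "Explain YaRN in one paragraph."}],
    temperature=0.6,  # thinking-mode recommendation
    top_p=0.95,
    extra_body={"top_k": 20, "min_p": 0},  # vLLM-specific sampling extras
    max_tokens=32768,  # the card's suggested output budget
)
print(response.choices[0].message.content)
```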
mradermacher/Qwen2.5-7B-Instruct-HardLambda0.5-208-GGUF
mradermacher
"2025-05-09T22:27:34Z"
18
1
transformers
[ "transformers", "gguf", "en", "base_model:rd211/Qwen2.5-7B-Instruct-HardLambda0.5-208", "base_model:quantized:rd211/Qwen2.5-7B-Instruct-HardLambda0.5-208", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-08T23:23:49Z"
--- base_model: rd211/Qwen2.5-7B-Instruct-HardLambda0.5-208 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/rd211/Qwen2.5-7B-Instruct-HardLambda0.5-208 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.5-208-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.5-208.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.5-208-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.5-208.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.5-208-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.5-208.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.5-208-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.5-208.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.5-208-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.5-208.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.5-208-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.5-208.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.5-208-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.5-208.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.5-208-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.5-208.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.5-208-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.5-208.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.5-208-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.5-208.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.5-208-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.5-208.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.5-208-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.5-208.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers 
to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
N96124200/emo_llama3.2_3b
N96124200
"2025-05-09T22:26:09Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-09T22:23:21Z"
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
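The template's "How to Get Started" section is empty; a minimal hedged sketch for this `text-generation` repo might look like the following, assuming the checkpoint loads as a standard Llama chat model (the prompt is purely illustrative).

```python
from transformers import pipeline

# Load the checkpoint from this repository as a causal-LM chat pipeline
generator = pipeline("text-generation", model="N96124200/emo_llama3.2_3b", device_map="auto")

messages = [{"role": "user", "content": "How are you feeling today?"}]
output = generator(messages, max_new_tokens=64, return_full_text=False)
print(output[0]["generated_text"])
```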
ajagota71/toxicity-reward-model-max-margin-seed-42-pythia-410m
ajagota71
"2025-05-09T22:22:29Z"
0
0
null
[ "safetensors", "gpt_neox", "region:us" ]
null
"2025-05-09T22:21:37Z"
--- language: en tags: - toxicity - reward-model - irl library_name: transformers base_model: pythia-410m pipeline_tag: text-classification --- # toxicity-reward-model-max-margin-seed-42-pythia-410m This model was trained using max_margin IRL to learn toxicity reward signals. Base model: EleutherAI/pythia-410m Original model: EleutherAI/pythia-410M Detoxified model: ajagota71/pythia-410m-detox-epoch-100
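The card gives no usage snippet; given its `text-classification` pipeline tag and GPT-NeoX base, one hedged guess at scoring text is sketched below — whether the weights load as a standard sequence-classification head is an assumption.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "ajagota71/toxicity-reward-model-max-margin-seed-42-pythia-410m"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Score a candidate text; the sign/scale of the reward depends on the IRL training setup
inputs = tokenizer("I hope you have a wonderful day!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```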
ajagota71/toxicity-reward-model-max-margin-seed-42-pythia-410m-checkpoint-70
ajagota71
"2025-05-09T22:21:26Z"
0
0
null
[ "safetensors", "gpt_neox", "region:us" ]
null
"2025-05-09T22:20:32Z"
--- language: en tags: - toxicity - reward-model - irl library_name: transformers base_model: pythia-410m pipeline_tag: text-classification --- # toxicity-reward-model-max-margin-seed-42-pythia-410m-checkpoint-70 This model was trained using max_margin IRL to learn toxicity reward signals. Base model: EleutherAI/pythia-410m Original model: EleutherAI/pythia-410M Detoxified model: ajagota71/pythia-410m-detox-epoch-100
shanchen/ds-limo-1.1-250
shanchen
"2025-05-09T22:20:49Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-09T01:02:25Z"
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B library_name: transformers model_name: ds-limo-1.1-250 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for ds-limo-1.1-250 This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="shanchen/ds-limo-1.1-250", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bitterman/s1/runs/ti5emrlr) This model was trained with SFT. ### Framework versions - TRL: 0.12.0 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.1.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Momin-Shahzad/Reinforce-model-4.2
Momin-Shahzad
"2025-05-09T22:18:04Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2025-05-03T20:51:47Z"
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-model-4.2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 57.00 +/- 33.85 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
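For context on what a Reinforce agent optimizes (Unit 4 of the course implements this from scratch), here is a minimal hedged sketch of the policy-gradient loss; it is illustrative and not this repo's exact training code.

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """REINFORCE loss: negative sum of log pi(a_t|s_t) times the return G_t."""
    returns, g = [], 0.0
    for r in reversed(rewards):      # accumulate discounted returns back-to-front
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    # Minimizing this is gradient ascent on expected return
    return -(torch.stack(log_probs) * returns).sum()
```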
PhucNT2511/db_lora_itay_diffusers
PhucNT2511
"2025-05-09T22:14:49Z"
1
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "lora", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:SG161222/Realistic_Vision_V6.0_B1_noVAE", "base_model:adapter:SG161222/Realistic_Vision_V6.0_B1_noVAE", "license:creativeml-openrail-m", "region:us" ]
text-to-image
"2025-05-08T20:25:04Z"
--- base_model: SG161222/Realistic_Vision_V6.0_B1_noVAE library_name: diffusers license: creativeml-openrail-m inference: true instance_prompt: a photo of shs man tags: - text-to-image - diffusers - lora - diffusers-training - stable-diffusion - stable-diffusion-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA DreamBooth - PhucNT2511/db_lora_itay_diffusers These are LoRA adaptation weights for SG161222/Realistic_Vision_V6.0_B1_noVAE. The weights were trained on a photo of shs man using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
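The "How to use" snippet above is still a TODO; below is a minimal hedged sketch of loading these weights with `diffusers`, using the base checkpoint and the `a photo of shs man` instance prompt from the card. Given the base repo's "noVAE" naming, you may need to attach an external VAE.

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the base checkpoint the LoRA was trained against (per the card's YAML)
pipe = AutoPipelineForText2Image.from_pretrained(
    "SG161222/Realistic_Vision_V6.0_B1_noVAE", torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights from this repository
pipe.load_lora_weights("PhucNT2511/db_lora_itay_diffusers")

# Generate with the instance prompt used during training
image = pipe("a photo of shs man").images[0]
image.save("shs_man.png")
```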
mradermacher/CavesOfQwen3-GGUF
mradermacher
"2025-05-09T22:14:05Z"
48
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:KaraKaraWitch/CavesOfQwen3", "base_model:quantized:KaraKaraWitch/CavesOfQwen3", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-09T00:07:30Z"
--- base_model: KaraKaraWitch/CavesOfQwen3 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/KaraKaraWitch/CavesOfQwen3 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/CavesOfQwen3-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CavesOfQwen3-GGUF/resolve/main/CavesOfQwen3.Q2_K.gguf) | Q2_K | 11.4 | | | [GGUF](https://huggingface.co/mradermacher/CavesOfQwen3-GGUF/resolve/main/CavesOfQwen3.Q3_K_S.gguf) | Q3_K_S | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/CavesOfQwen3-GGUF/resolve/main/CavesOfQwen3.Q3_K_M.gguf) | Q3_K_M | 14.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CavesOfQwen3-GGUF/resolve/main/CavesOfQwen3.Q3_K_L.gguf) | Q3_K_L | 16.0 | | | [GGUF](https://huggingface.co/mradermacher/CavesOfQwen3-GGUF/resolve/main/CavesOfQwen3.IQ4_XS.gguf) | IQ4_XS | 16.7 | | | [GGUF](https://huggingface.co/mradermacher/CavesOfQwen3-GGUF/resolve/main/CavesOfQwen3.Q4_K_S.gguf) | Q4_K_S | 17.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CavesOfQwen3-GGUF/resolve/main/CavesOfQwen3.Q4_K_M.gguf) | Q4_K_M | 18.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CavesOfQwen3-GGUF/resolve/main/CavesOfQwen3.Q5_K_S.gguf) | Q5_K_S | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/CavesOfQwen3-GGUF/resolve/main/CavesOfQwen3.Q5_K_M.gguf) | Q5_K_M | 21.8 | | | [GGUF](https://huggingface.co/mradermacher/CavesOfQwen3-GGUF/resolve/main/CavesOfQwen3.Q6_K.gguf) | Q6_K | 25.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/CavesOfQwen3-GGUF/resolve/main/CavesOfQwen3.Q8_0.gguf) | Q8_0 | 32.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mahojo/opt-125m-cluster-v2
mahojo
"2025-05-09T22:11:18Z"
9
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "generated_from_trainer", "dataset:Skylion007/openwebtext", "dataset:bookcorpus/bookcorpus", "dataset:lighteval/wikitext_103", "base_model:facebook/opt-125m", "base_model:finetune:facebook/opt-125m", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-24T21:26:25Z"
--- library_name: transformers license: other base_model: facebook/opt-125m tags: - generated_from_trainer model-index: - name: opt-125m-cluster-v2 results: [] datasets: - Skylion007/openwebtext - bookcorpus/bookcorpus - lighteval/wikitext_103 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-125m-cluster-v2 This model is a fine-tuned version of `facebook/opt-125m`, trained on a mixed dataset consisting of OpenWebText, WikiText, and BookCorpus. It was trained on a single GPU (Quadro RTX 8000, 48GB VRAM) using Hugging Face Transformers and PyTorch. ### 📈 Evaluation Results - Final Training Loss: **2.9084** - Final Perplexity (Eval): **19.10** - Evaluation Steps: Every 5,000 training steps - Total Training Steps: 50,000 ### 🧠 Model Description This model was fine-tuned to reduce perplexity on general English text using causal language modeling (next-token prediction). The model was trained from scratch on 1 million samples with sequence length 1024 and optimized with AdamW and cosine learning rate scheduling. ### ✅ Intended Uses & Limitations **Intended uses:** - Perplexity benchmarking - Research on training dynamics and convergence - Fine-tuning base for instruction tuning or domain adaptation **Limitations:** - Not instruction-tuned - Not aligned for safe deployment - May reflect biases from internet text ### 📊 Training & Evaluation Data A shuffled dataset combining: - **60% OpenWebText** - **30% WikiText** - **10% BookCorpus** All data was pre-tokenized using the OPT tokenizer and capped at 1024 tokens per sample. ### ⚙️ Training Procedure - **Batch size**: 5 (accumulated to 40 via `gradient_accumulation_steps=8`) - **Learning rate**: 2e-4 - **Optimizer**: AdamW with betas (0.9, 0.999), eps 1e-8 - **LR scheduler**: Cosine decay with 1,000 warmup steps - **Precision**: Mixed (fp16 with AMP) - **Steps**: 50,000 - **Framework**: Transformers 4.49.0, PyTorch 2.6.0 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 5 - eval_batch_size: 3 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 40 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - training_steps: 50000 - mixed_precision_training: Native AMP ### 📊 Training Results | steps | Perplexity | Cross-Entropy Loss | |----------------|------------|---------------------| | 5k | 24.07 | 3.1811 | | 10k | 23.28 | 3.1476 | | 15k | 22.44 | 3.1110 | | 20k | 21.63 | 3.0742 | | 25k | 20.97 | 3.0432 | | 30k | 20.33 | 3.0121 | | 35k | 19.73 | 2.9819 | | 40k | 19.32 | 2.9611 | | 45k | 19.11 | 2.9500 | | 50k | **19.10** | **2.9498** | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.3.2 - Tokenizers 0.21.1
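As a quick sanity check on the reported numbers: perplexity is the exponential of the cross-entropy loss, so the final table row reproduces the headline figure.

```python
import math

# exp(final eval cross-entropy) should match the reported perplexity
print(math.exp(2.9498))  # ≈ 19.10
```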
N96124200/N96124200
N96124200
"2025-05-09T22:10:10Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-09T22:07:05Z"
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Gryphe/Pantheon-Proto-RP-1.8-30B-A3B
Gryphe
"2025-05-09T22:08:47Z"
0
8
null
[ "safetensors", "qwen3_moe", "instruct", "finetune", "chatml", "axolotl", "roleplay", "en", "base_model:Qwen/Qwen3-30B-A3B-Base", "base_model:finetune:Qwen/Qwen3-30B-A3B-Base", "license:apache-2.0", "region:us" ]
null
"2025-05-09T14:18:51Z"
---
base_model:
- Qwen/Qwen3-30B-A3B-Base
tags:
- instruct
- finetune
- chatml
- axolotl
- roleplay
license: apache-2.0
language:
- en
---

![image/png](Pantheon.png)

# Pantheon-Proto-RP-1.8-30B-A3B

**Note:** This model is a Qwen 30B MoE prototype and can be considered a sidegrade from [my Small release](https://huggingface.co/Gryphe/Pantheon-RP-1.8-24b-Small-3.1) from some time ago. It did not receive extensive testing beyond a couple of benchmarks to confirm its sanity, so feel free to let me know what you think of it!

Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of diverse personas that can be summoned with a simple activation phrase. Pantheon's purpose is two-fold: these personalities enhance the general roleplay experience, helping to capture personality traits, accents and mannerisms that language models might otherwise find difficult to convey well.

GGUF quants [are available here](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF).

Your user feedback is critical to me, so don't hesitate to tell me whether my model is either 1. terrible, 2. awesome or 3. somewhere in-between.

## Model details

Ever since Qwen 3 released I've been trying to get MoE finetuning to work - after countless frustrating days, much code hacking and so on, I finally got a full finetune to complete with reasonable loss values. I picked the base model for this since I didn't feel like trying to fight a reasoning model's training - maybe someday I'll make a model which uses thinking tags for the character's thoughts or something.

This time the recipe focused on combining as many data sources as I possibly could, featuring synthetic data from Sonnet 3.5 + 3.7, ChatGPT 4o and Deepseek. These then went through an extensive rewriting pipeline to eliminate common AI cliches, with the hopeful intent of providing you a fresh experience.

## Inference

The defaults below seem to work just fine with Qwen; a short transformers sketch using them can be found at the end of this card.

```
"temperature": 0.8,
"repetition_penalty": 1.05,
"min_p": 0.05
```

Having character names in front of messages is no longer a requirement but remains a personal recommendation of mine - it seems to help the model focus more on the character(s) in question.

## Prompt Format

The model was trained using ChatML and has been configured to automatically apply this template.

## General Roleplay

The model has been trained on three distinct categories of roleplay - Pantheon personas, general character cards and text adventure, the latter borrowing some from AI Dungeon's Wayfarer project. Note that all this data is primarily written from a second-person perspective, using "you" to refer to the user. This is based on my personal preference.

Due to the text adventure addition, the Markdown/novel ratio of the data has shifted to roughly 30/70. It should work well with both styles.

## Pantheon Personas

**Note:** This release excludes Raza and Xala, as their personalities did not give a distinct enough training signal to my liking.

Half of the Pantheon's data was regenerated using Sonnet 3.7 and then rewritten to counter the majority of cliches. For an optimal experience I highly encourage you to apply the longer prompt templates which I've included in the upload. Make sure to describe yourself as well!

As before, a single-line activation prompt is enough to call upon a personality, though their appearance may vary slightly from iteration to iteration. This is what the expanded prompts are for, as there's only so much I can achieve with the current state of technology, balancing a fine line between memorization and generalization.

To give the persona something to work with, I suggest you also add the following two lines to it:

```
Regarding the user: (Name, appearance, etc)
Location: (Where are you two? What are you doing?)
```

The less information you feed the prompt, the more it'll make things up - this is simply the nature of language models and far outside my capability to influence.

**Note:** Pantheon personas will usually match the roleplaying style (Markdown/novel) that you greet them with, unless specified directly in the system prompt.

### **Persona:** Clover
**System Prompt:** `You are Clover, a hospitable and warm-hearted Southern centaur girl with a strong connection to nature and a passion for making others feel welcome.`

### **Persona:** Haru
**System Prompt:** `You are Haru, a sweet but language-challenged harpy girl with a sharp mind, expressing yourself more through actions than words.`

### **Persona:** Kyra
**System Prompt:** `You are Kyra, a modern-day tsundere wolfgirl, feisty and independent on the outside but secretly caring on the inside.`

### **Persona:** Lyra
**System Prompt:** `You are Lyra, a sassy and confident eastern dragon girl who forms deep connections through witty banter and genuine care.`

**Note:** May the sass be with you.

### **Persona:** Nyaa
**System Prompt:** `You are Nyaa, a playful and alluring tabaxi catgirl from Faerûn, always seeking new adventures and mischief.`

### **Persona:** Nyx
**System Prompt:** `You are Nyx, a timid yet endearing dragon girl who transforms from shy to passionate when feeling safe and comfortable.`

### **Persona:** Sera
**System Prompt:** `You are Sera, a seductive and slightly arrogant serpent girl who uses her sultry charm and wit to captivate others.`

### **Persona:** Stella Sabre
**System Prompt:** `You are Stella Sabre, a brash and outgoing anthro batpony mare serving in the Lunar Guard, speaking with a distinct Northern Equestrian Mountain accent.`

**Note:** Full credit goes to [Flammenwerfer](https://www.fimfiction.net/user/83058/Flammenwerfer) for allowing me to use this amazing character.

### **Persona:** Tiamat
**System Prompt:** `You are Tiamat, a five-headed dragon goddess embodying wickedness and cruelty, the malevolent personification of evil dragonkind.`

### **Persona:** Tsune
**System Prompt:** `You are Tsune, a bold and outgoing three-tailed kitsune girl who delights in teasing and seducing mortals.`

## Credits

- Everyone from [Anthracite](https://huggingface.co/anthracite-org)! Hi, guys!
- [Latitude](https://huggingface.co/LatitudeGames), who decided to take me on as a finetuner and gave me the chance to accumulate even more experience in this fascinating field
- All the folks I chat with on a daily basis on Discord! You know who you are.
- Anyone I forgot to mention, just in case!
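## Inference example (sketch)

As referenced in the Inference section, here is a minimal transformers sketch of the recommended sampler settings. Treat it as illustrative rather than canonical - it assumes a recent transformers release with `min_p` support, accelerate installed, and enough VRAM for the model; the Kyra greeting is just a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Gryphe/Pantheon-Proto-RP-1.8-30B-A3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" assumes the accelerate package is installed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# A Pantheon persona is activated through its single-line system prompt.
messages = [
    {"role": "system", "content": "You are Kyra, a modern-day tsundere wolfgirl, feisty and independent on the outside but secretly caring on the inside."},
    {"role": "user", "content": "Hey Kyra, I'm home!"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# The sampler settings recommended in the Inference section above.
output = model.generate(
    input_ids, max_new_tokens=256, do_sample=True,
    temperature=0.8, repetition_penalty=1.05, min_p=0.05,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```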
shan124/hackindiaproject
shan124
"2025-05-09T22:07:13Z"
0
0
null
[ "license:other", "region:us" ]
null
"2025-05-09T21:31:14Z"
---
license: other
license_name: personal
license_link: LICENSE
---
Konthee/qwen3-14B-4bit-AI-legal
Konthee
"2025-05-09T21:59:04Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-14B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-14B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-05-09T21:58:51Z"
---
base_model: unsloth/Qwen3-14B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Konthee
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-14B-unsloth-bnb-4bit

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
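As a hedged loading sketch (not part of the original upload - it simply mirrors the standard Unsloth workflow, and `max_seq_length` is an illustrative choice):

```python
from unsloth import FastLanguageModel

# Minimal 4-bit loading sketch for this fine-tune.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Konthee/qwen3-14B-4bit-AI-legal",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast inference path
```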
robiual-awal/934e4e20-4a66-48be-8051-ab6876c2f0ff
robiual-awal
"2025-05-09T21:57:42Z"
0
0
peft
[ "peft", "generated_from_trainer", "base_model:Qwen/Qwen2-1.5B-Instruct", "base_model:adapter:Qwen/Qwen2-1.5B-Instruct", "region:us" ]
null
"2025-05-09T21:57:29Z"
---
library_name: peft
tags:
- generated_from_trainer
base_model: Qwen/Qwen2-1.5B-Instruct
model-index:
- name: robiual-awal/934e4e20-4a66-48be-8051-ab6876c2f0ff
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# robiual-awal/934e4e20-4a66-48be-8051-ab6876c2f0ff

This model was trained from scratch on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.6571

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
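A loading sketch (not part of the auto-generated card - it assumes this is a causal-LM LoRA adapter for the base model declared in the metadata above):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Attach the PEFT adapter to its declared base model.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-1.5B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "robiual-awal/934e4e20-4a66-48be-8051-ab6876c2f0ff")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
```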
AdrianPerez3/ejercicio3_tickets_adrian_perez
AdrianPerez3
"2025-05-09T21:56:55Z"
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2025-05-08T16:34:53Z"
---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-base
tags:
- generated_from_trainer
model-index:
- name: ejercicio3_tickets_adrian_perez
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# ejercicio3_tickets_adrian_perez

This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2471
- Rougel: 0.4917

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rougel |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.3383        | 0.4794 | 500  | 0.3009          | 0.4760 |
| 0.2944        | 0.9588 | 1000 | 0.2729          | 0.4841 |
| 0.2653        | 1.4382 | 1500 | 0.2637          | 0.4851 |
| 0.2459        | 1.9175 | 2000 | 0.2533          | 0.4912 |
| 0.2221        | 2.3969 | 2500 | 0.2508          | 0.4905 |
| 0.2225        | 2.8763 | 3000 | 0.2471          | 0.4917 |

### Framework versions

- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
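A hedged usage sketch (the ticket text and decoding settings are illustrative assumptions, since the card does not document the expected input format):

```python
from transformers import pipeline

# T5 fine-tune exposed through the text2text-generation pipeline.
pipe = pipeline("text2text-generation", model="AdrianPerez3/ejercicio3_tickets_adrian_perez")
ticket = "Mi pedido llegó dañado y quiero solicitar un reembolso."  # hypothetical ticket
print(pipe(ticket, max_new_tokens=64)[0]["generated_text"])
```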
Syldehayem/train_distilbert-base-uncased_20
Syldehayem
"2025-05-09T21:54:59Z"
2
0
transformers
[ "transformers", "safetensors", "distilbert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
"2025-05-08T15:08:58Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Emric/clothing
Emric
"2025-05-09T21:51:08Z"
48
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:stabilityai/stable-diffusion-3.5-large", "base_model:adapter:stabilityai/stable-diffusion-3.5-large", "region:us" ]
text-to-image
"2025-04-12T14:43:13Z"
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
  output:
    url: images/00013-378897046.png
- text: '-'
  output:
    url: images/08bcc1dc-2066-4926-9657-ed98a80339c3.png
base_model: stabilityai/stable-diffusion-3.5-large
instance_prompt: null
---

# clothing

<Gallery />

## Download model

Weights for this model are available in Safetensors format.

[Download](/Emric/clothing/tree/main) them in the Files & versions tab.
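Since the card ships no usage snippet, here is a minimal diffusers sketch. The LoRA weight filename is an assumption on my part - check the Files & versions tab for the actual name:

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name below is hypothetical; substitute the .safetensors file from this repo.
pipe.load_lora_weights("Emric/clothing", weight_name="lora.safetensors")
image = pipe("a model wearing an elegant tailored coat").images[0]
image.save("clothing_sample.png")
```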
MrRobotoAI/133-Q4_K_M-GGUF
MrRobotoAI
"2025-05-09T21:49:03Z"
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:MrRobotoAI/133", "base_model:quantized:MrRobotoAI/133", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-09T21:48:41Z"
---
base_model: MrRobotoAI/133
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# MrRobotoAI/133-Q4_K_M-GGUF

This model was converted to GGUF format from [`MrRobotoAI/133`](https://huggingface.co/MrRobotoAI/133) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MrRobotoAI/133) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo MrRobotoAI/133-Q4_K_M-GGUF --hf-file 133-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo MrRobotoAI/133-Q4_K_M-GGUF --hf-file 133-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo MrRobotoAI/133-Q4_K_M-GGUF --hf-file 133-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo MrRobotoAI/133-Q4_K_M-GGUF --hf-file 133-q4_k_m.gguf -c 2048
```
istupakov/gigaam-v2-onnx
istupakov
"2025-05-09T21:47:22Z"
0
1
null
[ "onnx", "gigaam-v2", "automatic-speech-recognition", "ru", "license:mit", "region:us" ]
automatic-speech-recognition
"2025-04-21T19:45:26Z"
---
license: mit
language:
- ru
pipeline_tag: automatic-speech-recognition
---

GigaAM v2 [models](https://github.com/salute-developers/GigaAM) converted to ONNX format for [onnx-asr](https://github.com/istupakov/onnx-asr).

Install onnx-asr:

```shell
pip install onnx-asr[cpu,hub]
```

Load the GigaAM v2 CTC model and recognize a wav file:

```py
import onnx_asr
model = onnx_asr.load_model("gigaam-v2-ctc")
print(model.recognize("test.wav"))
```

Load the GigaAM v2 RNN-T model and recognize a wav file:

```py
import onnx_asr
model = onnx_asr.load_model("gigaam-v2-rnnt")
print(model.recognize("test.wav"))
```

Code used to export the models:

```py
import gigaam
from pathlib import Path

onnx_dir = "gigaam-onnx"
model_type = "rnnt"  # or "ctc"

model = gigaam.load_model(
    model_type,
    fp16_encoder=False,  # only fp32 tensors
    use_flash=False,  # disable flash attention
)
model.to_onnx(dir_path=onnx_dir)

with Path(onnx_dir, "v2_vocab.txt").open("wt") as f:
    for i, token in enumerate(["\u2581", *(chr(ord("а") + i) for i in range(32)), "<blk>"]):
        f.write(f"{token} {i}\n")
```
mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF
mradermacher
"2025-05-09T21:45:45Z"
0
0
transformers
[ "transformers", "gguf", "en", "base_model:hllj/BloomZ-7B1-Vi-Math", "base_model:quantized:hllj/BloomZ-7B1-Vi-Math", "endpoints_compatible", "region:us", "imatrix" ]
null
"2025-05-09T21:11:30Z"
---
base_model: hllj/BloomZ-7B1-Vi-Math
language:
- en
library_name: transformers
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/hllj/BloomZ-7B1-Vi-Math

<!-- provided-files -->

static quants are available at https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-IQ1_M.gguf) | i1-IQ1_M | 2.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 |  |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 |  |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 |  |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-IQ2_M.gguf) | i1-IQ2_M | 2.8 |  |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.0 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-Q2_K.gguf) | i1-Q2_K | 3.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.5 |  |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 |  |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.2 |  |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-Q4_0.gguf) | i1-Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 |  |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 |  |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.4 |  |
| [GGUF](https://huggingface.co/mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF/resolve/main/BloomZ-7B1-Vi-Math.i1-Q6_K.gguf) | i1-Q6_K | 5.9 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
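For a runnable starting point, a hedged llama-cpp-python sketch (assumes `pip install llama-cpp-python huggingface-hub`; the quant choice follows the "fast, recommended" row above, and the Vietnamese math prompt is illustrative):

```python
from llama_cpp import Llama

# Download the i1-Q4_K_M quant from this repo and run a short completion.
llm = Llama.from_pretrained(
    repo_id="mradermacher/BloomZ-7B1-Vi-Math-i1-GGUF",
    filename="BloomZ-7B1-Vi-Math.i1-Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("Tính: 12 + 35 =", max_tokens=32)
print(out["choices"][0]["text"])
```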
shanchen/ds-limo-ja-100
shanchen
"2025-05-09T21:45:16Z"
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-28T18:59:30Z"
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
library_name: transformers
model_name: ds-limo-ja-100
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for ds-limo-ja-100

This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shanchen/ds-limo-ja-100", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bitterman/s1/runs/sewcfp7k)

This model was trained with SFT.

### Framework versions

- TRL: 0.12.0
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
shengyuanhu/benchmark_wmdp_kl_ckpt_140
shengyuanhu
"2025-05-09T21:44:55Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-09T21:40:23Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
neginr/multisubject_compsci_mc_2
neginr
"2025-05-09T21:44:22Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-09T20:54:32Z"
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: multisubject_compsci_mc_2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# multisubject_compsci_mc_2

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the neginr/multisubject_compsci_mc_2 dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0

### Training results

### Framework versions

- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
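A usage sketch (an assumption on my part - it presumes the tokenizer ships the Qwen chat template, as the base model does, and that a recent transformers release with chat-pipeline support is installed):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="neginr/multisubject_compsci_mc_2", device_map="auto")
messages = [{"role": "user", "content": "Which data structure offers O(1) average-case lookup?"}]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply
```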
asad1575/llama-3.1-8b-mcq-lora
asad1575
"2025-05-09T21:44:12Z"
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-05-09T21:14:06Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MrRobotoAI/130-Q4_K_M-GGUF
MrRobotoAI
"2025-05-09T21:42:38Z"
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:MrRobotoAI/130", "base_model:quantized:MrRobotoAI/130", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-09T21:42:16Z"
---
base_model: MrRobotoAI/130
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# MrRobotoAI/130-Q4_K_M-GGUF

This model was converted to GGUF format from [`MrRobotoAI/130`](https://huggingface.co/MrRobotoAI/130) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MrRobotoAI/130) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo MrRobotoAI/130-Q4_K_M-GGUF --hf-file 130-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo MrRobotoAI/130-Q4_K_M-GGUF --hf-file 130-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo MrRobotoAI/130-Q4_K_M-GGUF --hf-file 130-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo MrRobotoAI/130-Q4_K_M-GGUF --hf-file 130-q4_k_m.gguf -c 2048
```
Selssabil/News-Recommender-MIND-LAST-VR-9-5-2025-23Ep-rank-32
Selssabil
"2025-05-09T21:42:31Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-05-09T21:42:24Z"
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Selssabil
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/MPhi_Latest-GGUF
mradermacher
"2025-05-09T21:40:55Z"
0
0
transformers
[ "transformers", "gguf", "en", "base_model:codegood/MPhi_Latest", "base_model:quantized:codegood/MPhi_Latest", "endpoints_compatible", "region:us" ]
null
"2025-05-09T20:00:14Z"
---
base_model: codegood/MPhi_Latest
language:
- en
library_name: transformers
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/codegood/MPhi_Latest

<!-- provided-files -->

weighted/imatrix quants are available at https://huggingface.co/mradermacher/MPhi_Latest-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MPhi_Latest-GGUF/resolve/main/MPhi_Latest.Q2_K.gguf) | Q2_K | 0.7 |  |
| [GGUF](https://huggingface.co/mradermacher/MPhi_Latest-GGUF/resolve/main/MPhi_Latest.Q3_K_S.gguf) | Q3_K_S | 0.8 |  |
| [GGUF](https://huggingface.co/mradermacher/MPhi_Latest-GGUF/resolve/main/MPhi_Latest.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MPhi_Latest-GGUF/resolve/main/MPhi_Latest.IQ4_XS.gguf) | IQ4_XS | 0.9 |  |
| [GGUF](https://huggingface.co/mradermacher/MPhi_Latest-GGUF/resolve/main/MPhi_Latest.Q3_K_L.gguf) | Q3_K_L | 0.9 |  |
| [GGUF](https://huggingface.co/mradermacher/MPhi_Latest-GGUF/resolve/main/MPhi_Latest.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MPhi_Latest-GGUF/resolve/main/MPhi_Latest.Q4_K_M.gguf) | Q4_K_M | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MPhi_Latest-GGUF/resolve/main/MPhi_Latest.Q5_K_S.gguf) | Q5_K_S | 1.1 |  |
| [GGUF](https://huggingface.co/mradermacher/MPhi_Latest-GGUF/resolve/main/MPhi_Latest.Q5_K_M.gguf) | Q5_K_M | 1.1 |  |
| [GGUF](https://huggingface.co/mradermacher/MPhi_Latest-GGUF/resolve/main/MPhi_Latest.Q6_K.gguf) | Q6_K | 1.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MPhi_Latest-GGUF/resolve/main/MPhi_Latest.Q8_0.gguf) | Q8_0 | 1.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MPhi_Latest-GGUF/resolve/main/MPhi_Latest.f16.gguf) | f16 | 2.9 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->