modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
DerinSozen/bert-pair-clf | DerinSozen | 2025-08-15T03:39:40Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-15T03:39:17Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-pair-clf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-pair-clf
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3580
- Accuracy: 0.86
- F1: 0.8364
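A quick usage sketch (not part of the original card), assuming the checkpoint is a standard sentence-pair classifier loadable with the `transformers` pipeline; the input sentences are placeholders and the label semantics are undocumented:
```python
from transformers import pipeline

# Hypothetical usage; label names depend on the (undocumented) training setup.
clf = pipeline("text-classification", model="DerinSozen/bert-pair-clf")

# Sentence-pair input uses the pipeline's text/text_pair format.
result = clf({"text": "The weather is nice today.", "text_pair": "It is sunny outside."})
print(result)
```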
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 3
- mixed_precision_training: Native AMP
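For reference, a hedged sketch of how these hyperparameters map onto `TrainingArguments` in recent `transformers` versions (the actual training script is not included in the card; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-pair-clf",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",          # AdamW with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    num_train_epochs=3,
    fp16=True,                    # Native AMP mixed precision
)
```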
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 0.3174 | 2.3529 | 200 | 0.3580 | 0.86 | 0.8364 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
guangyaoz/dpo | guangyaoz | 2025-08-15T02:42:40Z | 0 | 0 | transformers |
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-07-31T05:09:42Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: dpo
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for dpo
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="guangyaoz/dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
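A minimal `DPOTrainer` sketch (not from the original card), assuming a preference dataset with `prompt`/`chosen`/`rejected` columns; the dataset name and hyperparameters below are placeholders:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2.5-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Placeholder preference dataset; any dataset with prompt/chosen/rejected columns works.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(output_dir="dpo", per_device_train_batch_size=2)
trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```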
### Framework versions
- TRL: 0.20.0
- Transformers: 4.53.2
- Pytorch: 2.7.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
razor534/blockassist-bc-lazy_extinct_termite_1755218316 | razor534 | 2025-08-15T00:40:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lazy extinct termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T00:40:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lazy extinct termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ShihteSiao/Talkia_LoRA | ShihteSiao | 2025-08-15T00:27:08Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T11:01:57Z |
---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ShihteSiao
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
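A hedged loading sketch (not part of the original card): the repository name suggests LoRA adapters on top of the listed base model, so the snippet below attaches them with PEFT; if the repository actually holds merged weights, loading it directly with `AutoModelForCausalLM.from_pretrained` is enough.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/qwen2.5-7b-unsloth-bnb-4bit"  # base model listed in the card metadata
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the adapters from this repository (assumed to contain LoRA weights).
model = PeftModel.from_pretrained(base, "ShihteSiao/Talkia_LoRA")
```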
|
adcg1355/final_bart_model | adcg1355 | 2025-08-15T00:24:44Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-15T00:24:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
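Pending author-provided instructions, here is a minimal loading sketch based only on the repository metadata (a BART `text2text-generation` checkpoint); the example input and generation settings are placeholders:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "adcg1355/final_bart_model"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Placeholder input; the intended task (summarization, paraphrasing, etc.) is not documented.
inputs = tokenizer("An example input sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```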
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_pnas_layer_10_3_all_37_0.0001_640_1 | winnieyangwannan | 2025-08-14T23:30:10Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-14T23:28:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
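Pending author-provided instructions, here is a minimal text-generation sketch based only on the repository metadata (an OLMoE conversational checkpoint); the prompt and generation settings are placeholders:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_pnas_layer_10_3_all_37_0.0001_640_1",
    device_map="auto",
)
messages = [{"role": "user", "content": "Hello!"}]  # placeholder prompt
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```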
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
miladatefi/MicroAI-7B-Adapters | miladatefi | 2025-08-14T22:54:02Z | 0 | 0 | peft |
[
"peft",
"safetensors",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"region:us"
] | text-generation | 2025-08-14T20:50:32Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.3
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
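Pending author-provided instructions, here is a minimal PEFT loading sketch using the base model listed in the card metadata; all other details are assumptions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.3"  # base model listed above
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapters from this repository.
model = PeftModel.from_pretrained(base, "miladatefi/MicroAI-7B-Adapters")
```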
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
Abdalwhababdalgadeer/577ii | Abdalwhababdalgadeer | 2025-08-14T22:12:08Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-14T22:12:08Z |
---
license: apache-2.0
---
|
richardcai/modernbert-tmf-4 | richardcai | 2025-08-14T22:08:00Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-14T22:05:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
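Pending author-provided instructions, here is a minimal sketch based only on the repository metadata (a ModernBERT text-classification checkpoint); the input is a placeholder and the label semantics are undocumented:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="richardcai/modernbert-tmf-4")
print(clf("An example sentence."))  # placeholder input
```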
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts-down_pnas_layer_10_3_all_37_0.001_8960_1 | winnieyangwannan | 2025-08-14T21:59:26Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-14T21:58:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Emsalettin/blockassist-bc-alert_prickly_skunk_1755208564 | Emsalettin | 2025-08-14T21:56:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert prickly skunk",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T21:56:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert prickly skunk
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Renu-Sara-Alexander-Viral-Video-Clip/New.full.videos.Renu.Sara.Alexander.Viral.Video.Official.Tutorial | Renu-Sara-Alexander-Viral-Video-Clip | 2025-08-14T21:53:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-14T21:53:35Z |
|
darknoon/svg-stack-filtered-sft-qwen2.5-vl-7b-trl-10k | darknoon | 2025-08-14T21:46:24Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-to-text | 2025-08-14T21:42:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FastFlowLM/Qwen3-8B-NPU2 | FastFlowLM | 2025-08-14T20:49:53Z | 78 | 0 | transformers |
[
"transformers",
"qwen3",
"text-generation",
"qwen",
"conversational",
"en",
"arxiv:2309.00071",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-07-21T19:47:16Z |
---
base_model: Qwen/Qwen3-8B
language:
- en
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
license: apache-2.0
tags:
- qwen3
- qwen
- transformers
---
# Qwen3-8B
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content based on the given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint:
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser deepseek-r1
```
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by vLLM and SGLang.
> Please refer to our documentation for [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) and [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> **Note**
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic capabilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
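As a concrete illustration (not part of the original card), the thinking-mode sampling settings above map directly onto `generate` keyword arguments in `transformers`, reusing `model` and `model_inputs` from the Quickstart snippet; swap in the non-thinking values as needed:
```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,  # thinking mode; 0.7 for non-thinking mode
    top_p=0.95,       # thinking mode; 0.8 for non-thinking mode
    top_k=20,
    min_p=0.0,
)
```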
### Citation
If you find our work helpful, feel free to cite it.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
```
|
AAAAnsah/Llama-3.2-1B_ES_theta_1.7 | AAAAnsah | 2025-08-14T20:40:18Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T18:33:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FastFlowLM/Qwen3-1.7B-NPU2
|
FastFlowLM
| 2025-08-14T20:37:48Z | 42 | 0 |
transformers
|
[
"transformers",
"qwen3",
"text-generation",
"qwen",
"unsloth",
"conversational",
"en",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-21T22:31:40Z |
---
base_model: Qwen/Qwen3-1.7B
language:
- en
license_link: https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE
license: apache-2.0
tags:
- qwen3
- qwen
- unsloth
- transformers
---
# Qwen3-1.7B
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-1.7B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 1.7B
- Number of Parameters (Non-Embedding): 1.4B
- Number of Layers: 28
- Number of Attention Heads (GQA): 16 for Q and 8 for KV
- Context Length: 32,768
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-1.7B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint:
- vLLM:
```shell
vllm serve Qwen/Qwen3-1.7B --enable-reasoning --reasoning-parser deepseek_r1
```
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-1.7B --reasoning-parser deepseek-r1
```
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by vLLM and SGLang.
> Please refer to [our documentation](https://qwen.readthedocs.io/) for more details.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-1.7B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> **Note**
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-1.7B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
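As a rough illustration of the non-thinking-mode settings and the optional `presence_penalty` mentioned above, here is a minimal offline-inference sketch with vLLM; the prompt, penalty value, and output length are assumptions, not part of this card.
```python
from vllm import LLM, SamplingParams

# Non-thinking-mode sampling plus a moderate presence penalty to curb repetitions
sampling_params = SamplingParams(
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0.0,
    presence_penalty=1.5,  # assumed value; tune between 0 and 2
    max_tokens=32768,
)

llm = LLM(model="Qwen/Qwen3-1.7B")
outputs = llm.generate(["Give me a short introduction to large language models."], sampling_params)
print(outputs[0].outputs[0].text)
```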
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
```
|
AAAAnsah/Llama-3.2-1B_BMA_theta_1.9
|
AAAAnsah
| 2025-08-14T20:37:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T18:30:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JermemyHaschal/llama-joycaption-beta-one-hf-llava-gguf
|
JermemyHaschal
| 2025-08-14T20:37:07Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-14T19:27:42Z |
Forked from Mungert/llama-joycaption-beta-one-hf-llava-GGUF and combined with https://huggingface.co/concedo/llama-joycaption-beta-one-hf-llava-mmproj-gguf for my personal use with PocketPal
|
je-suis-tm/monica_bellucci_lora_flux_nf4
|
je-suis-tm
| 2025-08-14T20:35:03Z | 47 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"qlora",
"flux",
"nf4",
"template:diffusion-lora",
"dataset:je-suis-tm/monica_bellucci_lora_flux_nf4",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] |
text-to-image
| 2025-08-11T05:16:55Z |
---
tags:
- text-to-image
- lora
- qlora
- flux
- nf4
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/2025-08-11_14-08-08.png
text: >-
Training With QLoRA: Monica Bellucci with long straight dark hair cascading down her back, dons a black silky V-neck dress adorned with ruffles. Her left arm rests on a white fur, her body angled toward the camera. The composition is straightforward, emphasizing her face and upper body against a plain, blurred background. Black dominates, contrasting with her skin and hair, while the white surface brightens her features. The atmosphere is elegant and sophisticated, conveying her confidence through posture and expression. Soft, natural lighting without harsh shadows indicates an indoor setting. The minimalist, classic style combines simplicity with sophistication, using monochromatic tones and formal attire to create a timeless look.
- output:
url: images/2025-08-11_06-39-29.png
text: >-
Training Without QLoRA: Monica Bellucci with long straight dark hair cascading down her back, dons a black silky V-neck dress adorned with ruffles. Her left arm rests on a white fur, her body angled toward the camera. The composition is straightforward, emphasizing her face and upper body against a plain, blurred background. Black dominates, contrasting with her skin and hair, while the white surface brightens her features. The atmosphere is elegant and sophisticated, conveying her confidence through posture and expression. Soft, natural lighting without harsh shadows indicates an indoor setting. The minimalist, classic style combines simplicity with sophistication, using monochromatic tones and formal attire to create a timeless look.
- output:
url: images/2025-08-11_17-33-50.png
text: >-
Testing With QLoRA: Monica Bellucci with blond bobbed hairstyle. wearing a fashion suit and gold neckless. In a Parisian street at night. she is smoking and looking at the camera. shot on polaroid film with front flash light. low angle view --ar 2:3 --style raw --profile o6rydcy --stylize 0 --v 6.1
- output:
url: images/2025-08-11_17-47-05.png
text: >-
Testing Without QLoRA: Monica Bellucci with blond bobbed hairstyle. wearing a fashion suit and gold neckless. In a Parisian street at night. she is smoking and looking at the camera. shot on polaroid film with front flash light. low angle view --ar 2:3 --style raw --profile o6rydcy --stylize 0 --v 6.1
- output:
url: images/2025-08-11_14-01-22.png
text: >-
Testing With QLoRA: Monica Bellucci in a white kimono of cherry blossom patterns
- output:
url: images/2025-08-11_15-12-37_0.png
text: >-
Testing Without QLoRA: Monica Bellucci in a white kimono of cherry blossom patterns
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: monica bellucci, lora, qlora, flux, nf4
license: mit
datasets:
- je-suis-tm/monica_bellucci_lora_flux_nf4
---
# Monica Bellucci Lora Flux NF4
<Gallery />
The QLoRA fine-tuning process of `monica_bellucci_lora_flux_nf4` takes inspiration from [this post](https://huggingface.co/blog/diffusers-quantization). Training was executed on a local computer with the same parameters as in that post; it took around 14 hours on an 8 GB VRAM 4060, with peak VRAM usage around 7.7 GB. To avoid running low on VRAM, **both the transformer and the text encoder were quantized.** All images shown here were generated with the parameters below:
* Height: 512
* Width: 512
* Guidance scale: 5
* Num inference steps: 20
* Max sequence length: 512
* Seed: 0
## Usage
```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
from transformers import T5EncoderModel
text_encoder_4bit = T5EncoderModel.from_pretrained(
    "hf-internal-testing/flux.1-dev-nf4-pkg",
    subfolder="text_encoder_2",
    torch_dtype=torch.float16,
)
transformer_4bit = FluxTransformer2DModel.from_pretrained(
    "hf-internal-testing/flux.1-dev-nf4-pkg",
    subfolder="transformer",
    torch_dtype=torch.float16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.float16,
    transformer=transformer_4bit,
    text_encoder_2=text_encoder_4bit,
)
pipe.load_lora_weights(
    "je-suis-tm/monica_bellucci_lora_flux_nf4",
    weight_name="pytorch_lora_weights.safetensors",
)
prompt="Monica Bellucci in a white kimono of cherry blossom patterns"
image = pipe(
prompt,
height=512,
width=512,
guidance_scale=5,
num_inference_steps=20,
max_sequence_length=512,
generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("monica_bellucci_lora_flux_nf4.png")
```
## Trigger words
You should use `Monica Bellucci` to trigger the image generation.
## Download model
[Download](/je-suis-tm/monica_bellucci_lora_flux_nf4/tree/main) them in the Files & versions tab.
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755203590
|
Ferdi3425
| 2025-08-14T20:34:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T20:34:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
myfi/parser_model_ner_gemma_v0.1
|
myfi
| 2025-08-14T20:22:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T20:20:11Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** myfi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chainway9/blockassist-bc-untamed_quick_eel_1755200127
|
chainway9
| 2025-08-14T20:01:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T20:01:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Klear-Reasoner-8B-SFT-GGUF
|
mradermacher
| 2025-08-14T19:55:01Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:Suu/KlearReasoner-MathSub-30K",
"dataset:Suu/KlearReasoner-CodeSub-15K",
"base_model:Suu/Klear-Reasoner-8B-SFT",
"base_model:quantized:Suu/Klear-Reasoner-8B-SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-14T13:55:37Z |
---
base_model: Suu/Klear-Reasoner-8B-SFT
datasets:
- Suu/KlearReasoner-MathSub-30K
- Suu/KlearReasoner-CodeSub-15K
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Suu/Klear-Reasoner-8B-SFT
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Klear-Reasoner-8B-SFT-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Klear-Reasoner-8B-SFT-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
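As a rough, hedged sketch of local inference with one of these files, assuming `llama-cpp-python` with `huggingface_hub` installed (the chosen quant file and prompt are illustrative; any file from the table below works the same way):
```python
from llama_cpp import Llama

# Download a quant directly from this repo and run a short chat completion
llm = Llama.from_pretrained(
    repo_id="mradermacher/Klear-Reasoner-8B-SFT-GGUF",
    filename="Klear-Reasoner-8B-SFT.Q4_K_S.gguf",  # assumed choice; see the table below
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain what a GGUF file is."}]
)
print(out["choices"][0]["message"]["content"])
```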
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Klear-Reasoner-8B-SFT-GGUF/resolve/main/Klear-Reasoner-8B-SFT.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Klear-Reasoner-8B-SFT-GGUF/resolve/main/Klear-Reasoner-8B-SFT.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Klear-Reasoner-8B-SFT-GGUF/resolve/main/Klear-Reasoner-8B-SFT.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Klear-Reasoner-8B-SFT-GGUF/resolve/main/Klear-Reasoner-8B-SFT.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Klear-Reasoner-8B-SFT-GGUF/resolve/main/Klear-Reasoner-8B-SFT.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Klear-Reasoner-8B-SFT-GGUF/resolve/main/Klear-Reasoner-8B-SFT.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Klear-Reasoner-8B-SFT-GGUF/resolve/main/Klear-Reasoner-8B-SFT.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Klear-Reasoner-8B-SFT-GGUF/resolve/main/Klear-Reasoner-8B-SFT.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Klear-Reasoner-8B-SFT-GGUF/resolve/main/Klear-Reasoner-8B-SFT.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Klear-Reasoner-8B-SFT-GGUF/resolve/main/Klear-Reasoner-8B-SFT.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Klear-Reasoner-8B-SFT-GGUF/resolve/main/Klear-Reasoner-8B-SFT.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Klear-Reasoner-8B-SFT-GGUF/resolve/main/Klear-Reasoner-8B-SFT.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DungND1107/cpo-full-contrastive-fixed-v5_checkpoint_1300
|
DungND1107
| 2025-08-14T19:46:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T19:46:42Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DungND1107/cpo-full-contrastive-fixed-v5_checkpoint_800
|
DungND1107
| 2025-08-14T19:39:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T19:39:16Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
adameubanks/word_embeddings_over_20_years
|
adameubanks
| 2025-08-14T19:37:00Z | 0 | 0 | null |
[
"fineweb",
"word2vec",
"embeddings",
"news-data",
"dataset:HuggingFaceFW/fineweb",
"region:us"
] | null | 2025-08-14T19:36:08Z |
---
datasets:
- HuggingFaceFW/fineweb
tags:
- fineweb
- word2vec
- embeddings
- news-data
---
|
NullAxis/karmix-merge-experiments
|
NullAxis
| 2025-08-14T19:21:47Z | 344 | 6 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"en",
"base_model:chemwolf/Karmix-XL-v0",
"base_model:finetune:chemwolf/Karmix-XL-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-05-13T17:38:18Z |
---
language:
- en
pipeline_tag: text-to-image
tags:
- safetensors
- diffusers
base_model:
- chemwolf/Karmix-XL-v0
library_name: diffusers
license: other
---
This repository is an offshoot of the main **Karmix-XL v0** (https://huggingface.co/chemwolf/Karmix-XL-v0) model, containing experimental versions of the model that use various merging methods and parameters.
**The models are provided "as is"; because they are experiments, flawless behavior is not guaranteed.**
These versions **are not** release versions; they remain prototypes and do not reflect the future direction of the model.
Additional information about the existing versions will be added later.
|
nde-dilan/kasahorow_mt_en_kwl
|
nde-dilan
| 2025-08-14T19:10:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-ROMANCE",
"base_model:finetune:Helsinki-NLP/opus-mt-en-ROMANCE",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-08-14T19:09:53Z |
---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ROMANCE
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: kasahorow_mt_en_kwl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kasahorow_mt_en_kwl
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ROMANCE](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0222
- Bleu: 94.0348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1174 | 1.0 | 394 | 0.0857 | 72.7016 |
| 0.0504 | 2.0 | 788 | 0.0411 | 86.9164 |
| 0.032 | 3.0 | 1182 | 0.0285 | 91.1270 |
| 0.014 | 4.0 | 1576 | 0.0241 | 93.1147 |
| 0.012 | 5.0 | 1970 | 0.0222 | 94.0348 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
ns69956/deltaTCP_35K
|
ns69956
| 2025-08-14T19:09:41Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-14T19:02:19Z |
---
license: apache-2.0
---
|
vespasianvindex/ppo-LunarLander-v2
|
vespasianvindex
| 2025-08-14T18:52:26Z | 10 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-13T19:59:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.83 +/- 17.59
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the repo name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption based on the repo name
checkpoint = load_from_hub("vespasianvindex/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
VIDEOS-18-Laura-Mendoza-viral-video-Clip/New.full.videos.Laura.Mendoza.viral.video.Official.Tutorial
|
VIDEOS-18-Laura-Mendoza-viral-video-Clip
| 2025-08-14T18:48:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-14T18:48:39Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?leaked-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
0xtherichard/blockassist-bc-snappy_rugged_caribou_1755192276
|
0xtherichard
| 2025-08-14T18:44:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snappy rugged caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T18:24:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snappy rugged caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Theros/ColdBrew-12B-Nemo-Orion-test0-Q4_K_M-GGUF
|
Theros
| 2025-08-14T18:42:31Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:SvalTek/ColdBrew-12B-Nemo-Orion-test0",
"base_model:quantized:SvalTek/ColdBrew-12B-Nemo-Orion-test0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T18:41:58Z |
---
base_model: SvalTek/ColdBrew-12B-Nemo-Orion-test0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# Theros/ColdBrew-12B-Nemo-Orion-test0-Q4_K_M-GGUF
This model was converted to GGUF format from [`SvalTek/ColdBrew-12B-Nemo-Orion-test0`](https://huggingface.co/SvalTek/ColdBrew-12B-Nemo-Orion-test0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SvalTek/ColdBrew-12B-Nemo-Orion-test0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Theros/ColdBrew-12B-Nemo-Orion-test0-Q4_K_M-GGUF --hf-file coldbrew-12b-nemo-orion-test0-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Theros/ColdBrew-12B-Nemo-Orion-test0-Q4_K_M-GGUF --hf-file coldbrew-12b-nemo-orion-test0-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Theros/ColdBrew-12B-Nemo-Orion-test0-Q4_K_M-GGUF --hf-file coldbrew-12b-nemo-orion-test0-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Theros/ColdBrew-12B-Nemo-Orion-test0-Q4_K_M-GGUF --hf-file coldbrew-12b-nemo-orion-test0-q4_k_m.gguf -c 2048
```
|
liu-nlp/EuroLLM-2B-smol-smoltalk-sv
|
liu-nlp
| 2025-08-14T18:38:45Z | 49 | 0 | null |
[
"safetensors",
"llama",
"text-generation",
"conversational",
"sv",
"dataset:HuggingFaceTB/smol-smoltalk",
"base_model:utter-project/EuroLLM-1.7B",
"base_model:finetune:utter-project/EuroLLM-1.7B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-08T08:13:41Z |
---
license: apache-2.0
datasets:
- HuggingFaceTB/smol-smoltalk
language:
- sv
base_model:
- utter-project/EuroLLM-1.7B
pipeline_tag: text-generation
---
# Model Card for EuroLLM-2b-smol-smoltalk-sv-en
This model is a fine-tuned version of [utter-project/EuroLLM-1.7B](https://huggingface.co/utter-project/EuroLLM-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl) on a Swedish version of [smol-smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smol-smoltalk) that was translated with [Gemma3-27B](https://huggingface.co/google/gemma-3-27b-it).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="liu-nlp/EuroLLM-2b-smol-smoltalk-sv", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.5.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755194892
|
indoempatnol
| 2025-08-14T18:34:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T18:34:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jinyu220/gaze_model_av_aloha_real_put_coin_v2
|
Jinyu220
| 2025-08-14T18:30:24Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-08-14T18:30:16Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
Halocline/blockassist-bc-meek_lanky_snake_1755195363
|
Halocline
| 2025-08-14T18:17:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek lanky snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T18:17:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek lanky snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zosiapoludzku/marikax
|
zosiapoludzku
| 2025-08-14T18:08:16Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-14T17:27:10Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
DungND1107/regtegt
|
DungND1107
| 2025-08-14T18:06:50Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:VLSP2025-LegalSML/qwen3-1.7b-legal-pretrain",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:VLSP2025-LegalSML/qwen3-1.7b-legal-pretrain",
"region:us"
] |
text-generation
| 2025-08-14T18:06:20Z |
---
base_model: VLSP2025-LegalSML/qwen3-1.7b-legal-pretrain
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:VLSP2025-LegalSML/qwen3-1.7b-legal-pretrain
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
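The section above is left blank in this card. As a minimal sketch (not an official example), the snippet below assumes the LoRA adapter in this repository applies on top of the base model listed in the card metadata and that the task is causal text generation.
```python
# Minimal sketch, assuming the adapter loads on top of the base model from the metadata.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "VLSP2025-LegalSML/qwen3-1.7b-legal-pretrain"
adapter_id = "DungND1107/regtegt"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Xin chào!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```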
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
maxibillion1975/blockassist-bc-iridescent_squeaky_sandpiper_1755192488
|
maxibillion1975
| 2025-08-14T17:55:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent squeaky sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T17:55:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent squeaky sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sionic-ai/mlx-gemma-3-270m-it
|
sionic-ai
| 2025-08-14T17:45:41Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"gemma3_text",
"gemma3",
"gemma",
"google",
"text-generation",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"license:gemma",
"region:us"
] |
text-generation
| 2025-08-14T17:44:58Z |
---
base_model: google/gemma-3-270m-it
license: gemma
tags:
- gemma3
- gemma
- google
- mlx
pipeline_tag: text-generation
library_name: mlx
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# sionic-ai/mlx-gemma-3-270m-it
This model [sionic-ai/mlx-gemma-3-270m-it](https://huggingface.co/sionic-ai/mlx-gemma-3-270m-it) was
converted to MLX format from [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("sionic-ai/mlx-gemma-3-270m-it")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
VoilaRaj/etadpu_S3q9yr
|
VoilaRaj
| 2025-08-14T17:45:16Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-14T17:43:22Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
MaestroDev19/CyberGemma-3-1b
|
MaestroDev19
| 2025-08-14T17:43:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T17:43:08Z |
---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MaestroDev19
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
johnkmcleod/etcl-texture
|
johnkmcleod
| 2025-08-14T17:23:51Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-14T17:23:38Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: undefined
instance_prompt: etcl-textured-cartoon
license: other
---
# etcl texture
<Gallery />
## Model description
## Trigger words
You should use `etcl-textured-cartoon` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/johnkmcleod/etcl-texture/tree/main) them in the Files & versions tab.
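## Use with diffusers (sketch)
The card does not include inference code and lists `base_model` as undefined. The snippet below is a hedged illustration that assumes a FLUX.1 [dev]-compatible LoRA loaded with `diffusers`, using the trigger word from this card; generation settings are placeholders.
```python
# Hedged sketch, assuming a FLUX.1 [dev]-compatible LoRA (base model is not stated in the card).
# Access to black-forest-labs/FLUX.1-dev requires accepting its license on the Hub.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("johnkmcleod/etcl-texture")  # this repository's LoRA weights
pipe.to("cuda")

image = pipe(
    "etcl-textured-cartoon, a fox reading a book in a cozy library",  # trigger word from the card
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("etcl_sample.png")
```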
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-krea-trainer](https://fal.ai/models/fal-ai/flux-krea-trainer).
|
pobiiiiiii/blockassist-bc-ravenous_yapping_ferret_1755191641
|
pobiiiiiii
| 2025-08-14T17:14:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ravenous yapping ferret",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T17:14:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ravenous yapping ferret
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sammau/wav2vec2-large-xls-r-300m-dv
|
Sammau
| 2025-08-14T16:57:31Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-14T06:17:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
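Since the card is an empty template, the following is only a generic sketch of the `transformers` automatic-speech-recognition pipeline for a wav2vec2 checkpoint; the target language and the expected audio format are assumptions, not documented facts.
```python
# Hedged sketch: generic ASR pipeline usage for a wav2vec2 checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Sammau/wav2vec2-large-xls-r-300m-dv")
result = asr("sample.wav")  # assumes a 16 kHz mono audio file, typical for XLS-R fine-tunes
print(result["text"])
```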
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dgambettaphd/M_llm3_run2_gen5_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST
|
dgambettaphd
| 2025-08-14T16:45:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T16:44:50Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
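The card is an auto-generated template with no usage details; the sketch below assumes a causal language model (suggested by the repository name but not confirmed by the card) and should be adjusted if the checkpoint is of a different type.
```python
# Hedged sketch: the architecture and task are assumptions, not stated in the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "dgambettaphd/M_llm3_run2_gen5_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))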
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_2_rounds_2_0_iter_0_prover1_17551
|
neural-interactive-proofs
| 2025-08-14T16:42:48Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T16:42:03Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_2_rounds_2_0_iter_0_prover1_17551
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_2_rounds_2_0_iter_0_prover1_17551
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_2_rounds_2_0_iter_0_prover1_17551", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/qwen2_5-32b-instruct_dpo_2025-08-14_17-30-41_cv_qwen2.5_32B_prover_debate_2_rounds_2_0_iter_0_prover1)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.53.2
- Pytorch: 2.7.0
- Datasets: 3.0.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
wherobots/ftw-aoti-torch280-cu126-pt2
|
wherobots
| 2025-08-14T16:16:01Z | 0 | 0 | null |
[
"image-segmentation",
"license:cc-by-3.0",
"region:us"
] |
image-segmentation
| 2025-08-14T15:53:47Z |
---
license: cc-by-3.0
pipeline_tag: image-segmentation
---
|
LINK-Clip-bhavana-menon-case-viral-video/NEW.FULL.videos.bhavana.menon.case.Viral.Video.Official.Tutorial
|
LINK-Clip-bhavana-menon-case-viral-video
| 2025-08-14T16:08:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-14T16:08:31Z |
<a href="https://sdu.sk/AyL"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_2_iter_3_provers
|
neural-interactive-proofs
| 2025-08-14T16:04:36Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T16:03:22Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_2_iter_3_provers
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_2_iter_3_provers
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_2_iter_3_provers", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/qwen2_5-32b-instruct_dpo_2025-08-14_16-13-57_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_2_iter_3_provers_group)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.53.2
- Pytorch: 2.7.0
- Datasets: 3.0.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
OllmOne/Kokoro-82M-v1.0-ONNX
|
OllmOne
| 2025-08-14T15:58:24Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-14T15:58:24Z |
---
license: apache-2.0
---
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755185366
|
indoempatnol
| 2025-08-14T15:58:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T15:58:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/toilpxe_ai26Re
|
VoilaRaj
| 2025-08-14T15:48:16Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-14T15:46:16Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF
|
mradermacher
| 2025-08-14T15:47:42Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:TMLR-Group-HF/Majority-Voting-Qwen2.5-3B",
"base_model:quantized:TMLR-Group-HF/Majority-Voting-Qwen2.5-3B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-14T15:13:12Z |
---
base_model: TMLR-Group-HF/Majority-Voting-Qwen2.5-3B
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/TMLR-Group-HF/Majority-Voting-Qwen2.5-3B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Majority-Voting-Qwen2.5-3B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
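As a hedged, non-official illustration of running one of the quants listed below, here is a minimal `llama-cpp-python` sketch; the file name refers to the Q4_K_M quant from the table, and the prompt and generation parameters are placeholders.
```python
# Hedged sketch: one common way to run a GGUF quant locally is llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Majority-Voting-Qwen2.5-3B.i1-Q4_K_M.gguf",  # downloaded from this repo
    n_ctx=4096,
)
out = llm("Explain majority voting in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```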
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-Q4_0.gguf) | i1-Q4_0 | 2.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen2.5-3B-i1-GGUF/resolve/main/Majority-Voting-Qwen2.5-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
svjack/Collei_wan_2_2_5B_text2video_lora
|
svjack
| 2025-08-14T15:33:19Z | 2 | 0 | null |
[
"region:us"
] | null | 2025-08-06T16:03:16Z |
### **LoRA Model Card**: `svjack/Collei_wan_2_2_5B_text2video_lora`
#### **Anime-Style Video Synthesis with COLLEI Character**
**Base Model**: `Wan2.2_T2V_5B`
**Fine-tuned Adapter**: `collei.safetensors`
**Key Strengths**:
- **Expressive Character Dynamics**: Captures COLLEI’s signature green hair, purple eyes, and playful expressions.
- **Environmental Integration**: Seamless blending of natural elements (cherry blossoms, sunlight, water) with character actions.
- **Texture Detail**: Renders intricate outfits (swimsuits, hoodies) and accessories (hairpins, jewelry) with high fidelity.
---
### **Optimized Example Prompts**
#### **Example 1: Park Bench Serenity**
**Prompt**:
```bash
Digital anime-style, COLLEI, a blushing green-haired character with large purple eyes and delicate nose, sits on a sunlit park bench. She wears an ornate green-and-gold high-collar outfit, loose wavy hair adorned with a black-green hairpin, licking an ice cream cone. Cherry blossom petals drift around her; dappled shadows stretch across the grassy path. Soft gradient background blends into the serene outdoor scene.
```
**Key Features**:
- **Lighting**: Sunlight highlights her hairpin and ice cream.
- **Motion**: Petals and subtle head tilt convey gentle movement.
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/k7xgWC9ngizbe0DEWI2DD.mp4"></video>
---
#### **Example 2: Poolside Splash**
**Prompt**:
```bash
Vibrant anime-style, COLLEI mid-swim in a sleek black swimsuit with green accents, water splashing around her. Sunlight filters through turquoise waves, creating shimmering patterns on her skin. Cherry blossoms float on the surface; background gradients from poolside to summer sky. Signature hairpin glints as she pushes through water.
```
**Key Features**:
- **Water Effects**: Realistic splashes and light refraction.
- **Character Pose**: Dynamic swimming motion with flushed cheeks.
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/gqA8JItAYzKZaSgXllonK.mp4"></video>
---
#### **Example 3: Diner Delight**
**Prompt**:
```bash
Charming anime-style, COLLEI in a green hoodie with gold accents, biting into a juicy burger at a cozy diner booth. Large purple eyes sparkle; cheeks flush with delight. Checkered tablecloth, fries, and milkshake frame the scene. Sunlight streams through windows, highlighting retro neon signs and stray cherry blossom petals.
```
**Key Features**:
- **Food Detail**: Glossy burger bun and whipped cream textures.
- **Ambiance**: Warm diner lighting contrasts with cool exterior gradients.
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/UgSZs1oe-wLsomlK0M5eb.mp4"></video>
---
#### **Example 4: Sing a Song**
**Prompt**:
```bash
Digital anime-style, COLLEI, a blushing green-haired character with large purple eyes and delicate nose, stands center stage under neon spotlights.
Her emerald waves shimmer with fiber-optic strands, while her black-green hairpin projects holographic cherry blossoms. She wears a
high-collar gold-trimmed outfit, now glowing with bioluminescent seams as she sings into a frosted microphone.
Around her, laser grids cut through fog, casting dappled neon shadows on the stage.
Holographic petals dissolve into pixel dust, and her voice sends rippling soundwave orbs into the cheering crowd.
anime, comic, game CG, illustration
best quality, masterpiece, highres, ultra-detailed, 8k, HDR, UHD
unreal engine rendered, physically-based rendering, sharp focus, RAW photo
extremely detailed CG, finely detail, Studio lighting, Vivid Colors
```
**Key Features**:
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/pZp6jIB1ea_VViZCeNf5R.mp4"></video>
---
### **Technical Parameters**
| Setting | Recommendation | Notes |
|------------------|--------------------|----------------------------------------|
| **CFG Scale** | 1 (Fixed) | Wan2.2 architecture requirement |
| **Sampler** | uni_pc | Optimal for fabric/water dynamics |
| **Steps** | 25 | Balances detail & speed |
| **Resolution** | 1280x704 | Maximizes VRAM efficiency |
| **Motion Factor**| 4-6 | Enhances environmental FX (e.g., splashes, petals) |
---
### **Performance Profile**
- **VRAM Consumption**: ~24GB at 1280x704
- **Troubleshooting**:
- **Water Artifacts**: Add `ripple distortion, caustics` to negative prompts.
- **Lighting Issues**: Use `softglow` node at 0.3 strength for diner scenes.
- **Character Consistency**: Increase COLLEI token weight by 1.2x.
### **License**
CC-BY-NC-SA 4.0 (Non-commercial, share-alike)
**Community Hub**: https://huggingface.co/svjack/Collei_wan_2_2_5B_text2video_lora/discussions
|
chainway9/blockassist-bc-untamed_quick_eel_1755182695
|
chainway9
| 2025-08-14T15:13:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T15:13:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ZPM07/Sculptor_NER
|
ZPM07
| 2025-08-14T15:11:58Z | 0 | 1 | null |
[
"safetensors",
"bert",
"ner",
"named-entity-recognition",
"russian",
"sports",
"fitness",
"work out",
"ru",
"license:mit",
"region:us"
] | null | 2025-08-14T14:51:14Z |
---
language:
- ru
tags:
- ner
- named-entity-recognition
- russian
- sports
- fitness
- work out
license: mit
---
# sculptor_NER: A model for extracting sports entities
A named entity recognition (NER) model for sports texts in Russian. Developed for the **Sculptor Sport** project.
## Entity types
- `EXERCISE` - exercise name (e.g., "приседания" [squats], "жим штанги лежа" [barbell bench press])
- `QUANTITY` - quantitative attributes:
  - weight (kg, lb)
  - number of sets/repetitions
  - execution time (seconds, minutes)
  - distance (meters, km)
  - incline angle (degrees)
## Usage
```python
from transformers import pipeline
ner = pipeline("ner", model="ZPM07/sculptor_NER")
text = "3 подхода по 10 приседаний с весом 50 кг"
results = ner(text)
for entity in results:
    print(f"{entity['word']} -> {entity['entity']} (confidence: {entity['score']:.2f})")
```
## Example output
Detected entities:
- 3 подхода -> QUANTITY (confidence: 1.00, position: 0-9)
- 10 -> QUANTITY (confidence: 1.00, position: 13-15)
- приседаний -> EXERCISE (confidence: 0.99, position: 16-26)
- 50 кг -> QUANTITY (confidence: 1.00, position: 35-40)
|
OpenVINO/llava-v1.6-mistral-7b-hf-fp16-ov
|
OpenVINO
| 2025-08-14T15:06:16Z | 0 | 0 | null |
[
"openvino",
"llava_next",
"vision",
"image-text-to-text",
"conversational",
"en",
"base_model:llava-hf/llava-v1.6-mistral-7b-hf",
"base_model:finetune:llava-hf/llava-v1.6-mistral-7b-hf",
"license:apache-2.0",
"region:us"
] |
image-text-to-text
| 2025-08-14T14:44:12Z |
---
license: apache-2.0
tags:
- vision
- image-text-to-text
language:
- en
pipeline_tag: image-text-to-text
inference: true
base_model:
- llava-hf/llava-v1.6-mistral-7b-hf
---
# llava-v1.6-mistral-7b-hf-fp16-ov
* Model creator: [llava-hf](https://huggingface.co/llava-hf)
* Original model: [llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf)
## Description
This is [llava-hf/llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format.
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2025.2.0 and higher
* Optimum Intel 1.26.0 and higher
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install packages required for using OpenVINO GenAI.
```
pip install --pre -U --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/pre-release openvino openvino-tokenizers openvino-genai
pip install huggingface_hub
```
2. Download model from HuggingFace Hub
```
import huggingface_hub as hf_hub
model_id = "OpenVINO/llava-v1.6-mistral-7b-hf-fp16-ov"
model_path = "llava-v1.6-mistral-7b-hf-fp16-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```
import openvino_genai as ov_genai
import requests
from PIL import Image
from io import BytesIO
import numpy as np
import openvino as ov
device = "CPU"
pipe = ov_genai.VLMPipeline(model_path, device)
def load_image(image_file):
    if isinstance(image_file, str) and (image_file.startswith("http") or image_file.startswith("https")):
        response = requests.get(image_file)
        image = Image.open(BytesIO(response.content)).convert("RGB")
    else:
        image = Image.open(image_file).convert("RGB")
    image_data = np.array(image.getdata()).reshape(1, image.size[1], image.size[0], 3).astype(np.byte)
    return ov.Tensor(image_data)
prompt = "What is unusual on this picture?"
url = "https://github.com/openvinotoolkit/openvino_notebooks/assets/29454499/d5fbbd1a-d484-415c-88cb-9986625b7b11"
image_tensor = load_image(url)
def streamer(subword: str) -> bool:
    print(subword, end="", flush=True)
    return False
pipe.start_chat()
output = pipe.generate(prompt, image=image_tensor, max_new_tokens=100, streamer=streamer)
pipe.finish_chat()
```
More GenAI usage examples can be found in OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples)
## Limitations
Check the original [model card](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) for limitations.
## Legal information
The original model is distributed under [apache-2.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md) license. More details can be found in [original model card](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
MANSTAGE/self-analysis-dataset
|
MANSTAGE
| 2025-08-14T15:01:34Z | 0 | 1 | null |
[
"gguf",
"llama",
"dataset:MANSTAGE/self-analysis-dataset",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T13:13:05Z |
---
datasets:
- MANSTAGE/self-analysis-dataset
---
|
Arkodev/hindi-bert-sentiment
|
Arkodev
| 2025-08-14T15:01:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-14T15:01:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
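Because the card is an auto-generated template, the snippet below is only a generic sketch of the `transformers` text-classification pipeline; label names and any preprocessing are undocumented and therefore assumptions.
```python
# Hedged sketch: generic text-classification pipeline usage for a BERT checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="Arkodev/hindi-bert-sentiment")
print(classifier("यह फिल्म बहुत अच्छी थी"))  # e.g. [{'label': ..., 'score': ...}]
```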
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lmstudio-community/gemma-3-270m-it-GGUF
|
lmstudio-community
| 2025-08-14T14:53:42Z | 0 | 3 | null |
[
"gguf",
"base_model:google/gemma-3-270m-it",
"base_model:quantized:google/gemma-3-270m-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-14T14:51:23Z |
---
base_model: google/gemma-3-270m-it
license: gemma
tags:
- gguf
---
## 💫 Community Model> gemma-3-270m-it by google
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator**: [google](https://huggingface.co/google)<br>
**Original model**: [gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it)<br>
**GGUF quantization**: provided by [LM Studio team](https://x.com/lmstudio) using `llama.cpp` release [b6153](https://github.com/ggerganov/llama.cpp/releases/tag/b6153)<br>
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
aochongoliverli/Qwen2.5-3B-math8k-distill-Qwen3-32B-16k-5epochs-2e-5lr-step600
|
aochongoliverli
| 2025-08-14T14:46:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T14:41:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
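A minimal sketch for loading this checkpoint with the 🤗 Transformers causal-LM API is shown below; it assumes the repository ID of this card, a standard chat template, and illustrative generation settings, since the card itself does not specify usage code.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "aochongoliverli/Qwen2.5-3B-math8k-distill-Qwen3-32B-16k-5epochs-2e-5lr-step600"

# Assumption: standard causal-LM checkpoint with a chat template (qwen2 architecture per the repo tags)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "What is 12 * 17?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```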
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alansary/hudex-hallucination-detector-testing
|
alansary
| 2025-08-14T14:42:55Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/llama-3.1-8b-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"region:us"
] |
text-generation
| 2025-08-14T14:29:39Z |
---
base_model: unsloth/llama-3.1-8b-unsloth-bnb-4bit
library_name: peft
model_name: hudex-model
tags:
- base_model:adapter:unsloth/llama-3.1-8b-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
licence: license
pipeline_tag: text-generation
---
# Model Card for hudex-model
This model is a fine-tuned version of [unsloth/llama-3.1-8b-unsloth-bnb-4bit](https://huggingface.co/unsloth/llama-3.1-8b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
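Because this repository is a LoRA adapter (PEFT) on top of `unsloth/llama-3.1-8b-unsloth-bnb-4bit`, loading it through PEFT may be closer to the intended usage than the generic pipeline call above. A hedged sketch, assuming the adapter repo ships tokenizer files and with an illustrative prompt and decoding settings:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "alansary/hudex-hallucination-detector-testing"

# Loads the base model named in the adapter config and applies the LoRA weights on top
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
# Assumes the adapter repo includes tokenizer files; otherwise load the tokenizer from the base model
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

prompt = "Does the following summary contain hallucinated facts? ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```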
## Training procedure
This model was trained with SFT.
### Framework versions
- PEFT 0.17.0
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
TAUR-dev/M-sft1e-5_ppo_countdown3arg_format0.3-rl
|
TAUR-dev
| 2025-08-14T14:41:10Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"en",
"license:mit",
"region:us"
] | null | 2025-08-14T14:40:01Z |
---
language: en
license: mit
---
# M-sft1e-5_ppo_countdown3arg_format0.3-rl
## Model Details
- **Training Method**: VeRL Reinforcement Learning (RL)
- **Stage Name**: rl
- **Experiment**: sft1e-5_ppo_countdown3arg_format0.3
- **RL Framework**: VeRL (Versatile Reinforcement Learning)
## Training Configuration
## Experiment Tracking
🔗 **View complete experiment details**: https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__sft1e-5_ppo_countdown3arg_format0.3__v1
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-sft1e-5_ppo_countdown3arg_format0.3-rl")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-sft1e-5_ppo_countdown3arg_format0.3-rl")
```
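A short generation sketch that continues from the snippet above; the prompt and decoding settings are illustrative and not part of the original card:

```python
# Continue from the snippet above: run a single countdown-style prompt
messages = [{"role": "user", "content": "Using the numbers 3, 7 and 12, reach the target 24."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```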
|
nlee-208/limo_S-dsr7b_T-q32b_10
|
nlee-208
| 2025-08-14T14:34:04Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T11:12:18Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
library_name: transformers
model_name: limo_S-dsr7b_T-q32b_10
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for limo_S-dsr7b_T-q32b_10
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nlee-208/limo_S-dsr7b_T-q32b_10", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nlee28/cross1/runs/so1iq3u4)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.3
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ronx-labs/affine-081410
|
ronx-labs
| 2025-08-14T14:32:46Z | 0 | 0 | null |
[
"gguf",
"gpt_oss",
"gpt-oss",
"openai",
"mxfp4",
"programming",
"code generation",
"code",
"coding",
"coder",
"chat",
"reasoning",
"thinking",
"r1",
"cot",
"deepseek",
"128k context",
"general usage",
"problem solving",
"brainstorming",
"solve riddles",
"Neo",
"MOE",
"Mixture of Experts",
"24 experts",
"NEO Imatrix",
"Code Imatrix",
"DI-Matrix",
"Dual Imatrix",
"Imatrix",
"text-generation",
"en",
"base_model:openai/gpt-oss-20b",
"base_model:quantized:openai/gpt-oss-20b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-08-14T14:28:52Z |
---
license: apache-2.0
base_model:
- openai/gpt-oss-20b
language:
- en
pipeline_tag: text-generation
tags:
- gpt_oss
- gpt-oss
- openai
- mxfp4
- programming
- code generation
- code
- coding
- coder
- chat
- reasoning
- thinking
- r1
- cot
- deepseek
- 128k context
- general usage
- problem solving
- brainstorming
- solve riddles
- Neo
- MOE
- Mixture of Experts
- 24 experts
- NEO Imatrix
- Code Imatrix
- DI-Matrix
- Dual Imatrix
- Imatrix
---
<small><font color="red">Specialized quants (including "MXFP4_MOE.gguf") for new OpenAI 20B MOE - Mixture of Experts Model at 80+ t/s.</font></small>
<h2>Openai_gpt-oss-20b-CODER-NEO-CODE-DI-MATRIX-GGUF</h2>
<img src="matrix2.webp" style="float:right; width:300px; height:300px; padding:10px;">
These are DUAL NEO Imatrix / CODE Imatrix GGUFs, with the NEO dataset by DavidAU and the "Code" dataset also compiled by DavidAU.
NEO dataset improves overall performance.
CODER dataset is specifically for coding performance.
Example generation provided at bottom of this page.
DUEL ("DI")-> Separate Imatrix datasets ("NEO" and "CODER" - generated separately per model) are co-joined to create a new Imatrix dataset, which is then applied to the quants.
Model also passed "hard" coding test too (4 experts); no issues (IQ4_NL).
TEST: Forcing the model to create code with no dependencies and limits of coding short cuts, with multiple loops, and in real time with no blocking in a language that does not support it normally.
(example prompt, and generation below)
Due to quanting issues with this model (which result in oddball quant sizes / mixtures), only TESTED quants will be uploaded (at the moment).
Currently that means IQ4_NL and <b>MXFP4_MOE</b> (a new OpenAI-specific quant type) are available.
NEO/Coder datasets performance improvements will show the most in the IQ4_NL quants.
There are TWO "IQ4_NL" quants:
- OpenAI-20B-NEO-CODE-DIMAT-IQ4_NL.gguf : DI Imatrix applied.
- OpenAI-20B-NEO-CODE-DIMAT-2-IQ4_NL.gguf : DI Imatrix applied, including output tensor (also imatrixed), and embed tensor at IQ4_NL.
There are THREE NEO MXFP4_MOE quants:
- OpenAI-20B-NEO-CODE-DIMAT-MXFP4_MOE2.gguf : Output tensor Q5_1 (DI Imatrix applied)
- OpenAI-20B-NEO-CODE-DIMAT-MXFP4_MOE3.gguf : Output tensor IQ4_NL (DI Imatrix applied)
- OpenAI-20B-NEO-CODE-DIMAT-MXFP4_MOE4.gguf : Output tensor IQ4_NL (DI Imatrix applied) AND Embed at IQ4_NL - this makes this quant the smallest version.
MXFP4_MOE quants currently outperform all other quant types except IQ4_NL, Q5_1 and Q8_0, because OpenAI's 20B model has unusual "tensor" dimensions that cause issues when compressing it to other quant types.
(IQ4_NL, Q5_1 and Q8_0 quants are compatible with OpenAI's tensor structure as of this writing.)
The MXFP4_MOE quant was specifically designed for (what appears to be) lossless quantization relative to the source model. The jury is still out on this.
Suggested Settings (tested in LM Studio, Beta Branch 0.3.21 ; 4):
- Context: 8k min.
- Temp 1.2+ for creative. Temp .6 (or so) for coding/general.
- Rep pen 1.1, topk 40, topp .95, min p 0.05
- Experts 4-8 depending on use case. (higher than 8 MAY lower quality)
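As a rough, hedged illustration of the settings above outside LM Studio, the sketch below uses the `llama-cpp-python` bindings (not mentioned in this card); the GGUF filename is a placeholder for whichever quant you download from this repo, and the prompt is illustrative.

```python
from llama_cpp import Llama

# Placeholder filename; substitute the quant you actually downloaded
llm = Llama(model_path="OpenAI-20B-NEO-CODE-DIMAT-IQ4_NL.gguf", n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short Python function that reverses a string."}],
    temperature=0.6,       # coding/general setting suggested above
    top_k=40,
    top_p=0.95,
    min_p=0.05,
    repeat_penalty=1.1,
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```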
Model Supports:
- 128k context
- up to 24 experts
- Tools use, browsing, etc
For my help docs, SETTING NUMBER OF EXPERTS, and more, see below.
See more about this model here:
https://huggingface.co/openai/gpt-oss-20b
Please refer to their model card, especially to control "thinking" levels.
---
<H2>Help, Adjustments, Samplers, Parameters and More</H2>
---
<B>CHANGE THE NUMBER OF ACTIVE EXPERTS:</B>
See this document:
https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:
- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")
- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
---
<h2>Example Generation</h2>
Using default settings (coder), 4 experts and no system prompt.
---
Prompt:
<b>
Write a JavaScript for fetch api using 4 ais, all using streaming that does NOT use "promise.all", and processes all streams separately but in real time - that means all streams need to be processed at the same time and not interrupt each other.
Also include sending a payload to each of the 4 ais with content, system prompt, temp, topk, topp, minp, repeat penalty ; all of these parameters will be in an array for each of the four ais.
Record the entire response - chunks - from each AI in an array - text.
Also, send the chunks in real time to 4 separate text boxes on an HTML page so real time readout from each AI can be seen by the user.
IMPORTANT: All streams must be processed at the same time, not in sequence. IMPORTANT: Do not use any dependencies, nor import any module(s).
</b>
OUTPUT:
(note: Pretty print, color and some formatting lost on copy/paste)
<p style="border: none; padding: 0cm">
<span style="display: inline-block; border: none; padding: 0cm"><font color="#ff0000"><b>Thought
for 11.81 seconds</b></font></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to write JavaScript code that uses fetch API with streaming
responses (likely using ReadableStream from
Response.body.getReader()). We have 4 AI endpoints; each will be
called with a payload containing content, system prompt, temp, topk,
topp, minp, repeat penalty. These parameters are in an array for each
of the four AIs.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to process all streams concurrently, not sequentially, and not
using Promise.all. So we can start fetch calls, get readers, then
read from each reader in separate async functions that run
concurrently (like starting them but not awaiting). We need to send
chunks in real time to 4 separate text boxes on an HTML page.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
cannot use dependencies or import modules; just plain JS.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to record entire response chunks per AI into arrays. So we
maintain array of strings for each AI, push each chunk as it arrives.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to send the payload to each AI with content, system prompt,
temp, topk, topp, minp, repeat penalty. These parameters will be in
an array for each of the four AIs. So maybe we have a config array
like:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">const
aiConfigs = [ {url: '', params: {...}}, ... ];</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
they said "parameters will be in an array for each of the four
ais." So we can define arrays.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to send payload as JSON body. Use fetch with method POST,
headers: Content-Type: application/json, body:
JSON.stringify(payload).</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Then
handle streaming: Response.body is a ReadableStream; getReader();
read loop reading chunks (Uint8Array). Convert to string using
TextDecoder. Append to array and update UI.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to run all four concurrently but not use Promise.all. We can
just start them sequentially but not await each before starting next.
For example:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">function
startAI(index) { ... } Then call for i=0..3: startAI(i); That will
kick off fetch calls; inside we handle reading and UI updates.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to ensure they run concurrently, so no blocking.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
record entire response chunks in an array per AI.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
also need to send the chunks in real time to 4 separate text boxes on
an HTML page. So we need to have <textarea id="ai1">
etc. Then update innerText or value as new chunk arrives.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
must not use dependencies, so no external libs.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to handle errors.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let's
produce code snippet with explanation.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We'll
create a function fetchAndStream(aiIndex) that returns nothing but
handles reading and UI updates.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
also need to define arrays for each AI's parameters. For example:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">const
aiParams = [ {url: '', content: 'Hello', systemPrompt: 'You are ...',
temp: 0.7, topk: 50, topp: 0.95, minp: 0.01, repeatPenalty: 2}, ...
];</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
they said "parameters will be in an array for each of the four
ais." So we can define arrays like:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">const
aiConfigs = [ {url: '', payload: {...}}, ... ];</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to send content, system prompt, temp, topk, topp, minp, repeat
penalty. So we can create payload object accordingly.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to record entire response chunks in an array per AI. We'll
have const responses = [[], [], [], []]; Then push each chunk string
into responses[aiIndex].</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
also need to update UI: document.getElementById('output'+(i+1)).value
+= chunk; or innerText.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let's
produce final code with comments and explanation.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to ensure no Promise.all. So just call startAI for each
index.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to handle fetch errors gracefully.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to use async/await inside each function but not block others.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
can do:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">async
function streamFromAi(index) { ... }</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Then
call streamFromAi(0); streamFromAi(1); etc. Since they are async
functions, calling them without await will start them concurrently.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively
we can wrap in IIFE: (async () => {...})(); But simpler: just call
them.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to ensure that the fetch calls happen concurrently; but if we
use await inside each function, it won't block others because each
function runs independently. So it's fine.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to record entire response chunks from each AI in an array -
we can push chunk string into responses[index].</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Now
produce final code snippet with HTML skeleton and JS script.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
also need to mention that the fetch API must support streaming; some
browsers may not support reading body as stream? But modern ones do.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We'll
provide fallback for older browsers maybe but not necessary.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to ensure no dependencies, so just plain JS.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let's
craft final answer: Provide explanation then code snippet.</span></p>
<p style="border: none; padding: 0cm"><br/>
<br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><font color="#ff0000"><b>OUTPUT:</b></font></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Below
is a <strong>stand‑alone</strong> example that fulfils every
requirement you listed:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">4
independent AI calls</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">All
four streams are read <em>concurrently</em> (no </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">Promise.all</code></span><span style="display: inline-block; border: none; padding: 0cm"> or
any other blocking construct)</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Each
call receives its own payload
(</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">content</code></span><span style="display: inline-block; border: none; padding: 0cm">, </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">systemPrompt</code></span><span style="display: inline-block; border: none; padding: 0cm">, </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">temp</code></span><span style="display: inline-block; border: none; padding: 0cm">, </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">topk</code></span><span style="display: inline-block; border: none; padding: 0cm">, </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">topp</code></span><span style="display: inline-block; border: none; padding: 0cm">, </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">minp</code></span><span style="display: inline-block; border: none; padding: 0cm">, </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">repeatPenalty</code></span><span style="display: inline-block; border: none; padding: 0cm">)
– the values are stored in an array of objects.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Every
chunk that arrives is pushed into a per‑AI array and written
immediately to a dedicated </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western"><textarea></code></span><span style="display: inline-block; border: none; padding: 0cm"> so
the user can see the output live.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">No
external libraries or imports – pure vanilla JavaScript.</span></p></li>
</ul>
<hr/>
<h2 class="western" style="border-top: none; border-bottom: 1px solid #3b3b3b; border-left: none; border-right: none; padding-top: 0cm; padding-bottom: 0.05cm; padding-left: 0cm; padding-right: 0cm">
<span style="display: inline-block; border: none; padding: 0cm">1️⃣
HTML skeleton</span></h2>
<pre class="western" style="border: 1px solid #3b3b3b; padding: 0.05cm"><span style="display: inline-block; border: none; padding: 0cm"><code class="western"><!DOCTYPE html></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"><html lang="en"></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"><head></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><meta charset="utf-8"></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><title>Concurrent AI Streaming Demo</title></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><style></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">textarea { width:100%; height:150px; margin-bottom:10px;}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"></style></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"></head></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"><body></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><!-- Four output boxes --></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><textarea id="output0" placeholder="AI #1 …"></textarea></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><textarea id="output1" placeholder="AI #2 …"></textarea></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><textarea id="output2" placeholder="AI #3 …"></textarea></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><textarea id="output3" placeholder="AI #4 …"></textarea></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><!-- Start button --></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><button id="startBtn">Start All 4 AIs</button></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><script src="ai-stream.js"></script> <!-- the script below --></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"></body></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"></html></code></span></pre>
<hr/>
<h2 class="western" style="border-top: none; border-bottom: 1px solid #3b3b3b; border-left: none; border-right: none; padding-top: 0cm; padding-bottom: 0.05cm; padding-left: 0cm; padding-right: 0cm">
<span style="display: inline-block; border: none; padding: 0cm">2️⃣
JavaScript (</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">ai-stream.js</code></span><span style="display: inline-block; border: none; padding: 0cm">)</span></h2>
<pre class="western" style="border: 1px solid #3b3b3b; padding: 0.05cm"><span style="display: inline-block; border: none; padding: 0cm"><code class="western">/* -------------------------------------------------------------</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* 1. Configuration – one object per AI</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* ------------------------------------------------------------- */</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">const aiConfigs = [</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">{</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// URL of the first AI endpoint</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">url: 'https://api.example.com/v1/chat',</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">payload: {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">content: "Hello, what can you do?",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">systemPrompt: "You are a helpful assistant.",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">temp: 0.7,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topk: 50,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topp: 0.95,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">minp: 0.01,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">repeatPenalty: 2</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">},</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">{</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">url: 'https://api.example.com/v1/chat',</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">payload: {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">content: "Tell me a joke.",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">systemPrompt: "You are a funny chatbot.",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">temp: 0.8,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topk: 40,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topp: 0.9,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">minp: 0.02,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">repeatPenalty: 1</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">},</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">{</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">url: 'https://api.example.com/v1/chat',</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">payload: {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">content: "Explain relativity.",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">systemPrompt: "You are a physics teacher.",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">temp: 0.6,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topk: 60,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topp: 0.97,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">minp: 0.01,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">repeatPenalty: 3</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">},</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">{</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">url: 'https://api.example.com/v1/chat',</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">payload: {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">content: "Give me a recipe for pancakes.",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">systemPrompt: "You are a cooking expert.",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">temp: 0.5,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topk: 30,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topp: 0.85,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">minp: 0.02,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">repeatPenalty: 2</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">];</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">/* -------------------------------------------------------------</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* 2. Storage for the full responses (array of strings)</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* ------------------------------------------------------------- */</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">const allResponses = [[], [], [], []]; // one array per AI</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">/* -------------------------------------------------------------</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* 3. Helper – read a stream and push chunks to UI & storage</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* ------------------------------------------------------------- */</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">async function streamFromAi(aiIndex) {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">const cfg = aiConfigs[aiIndex];</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">const outputBox = document.getElementById(`output${aiIndex}`);</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">const decoder = new TextDecoder('utf-8');</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">try {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// 3.1 Send the request</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">const response = await fetch(cfg.url, {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">method: 'POST',</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">headers: { 'Content-Type': 'application/json' },</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">body: JSON.stringify(cfg.payload)</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">});</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">if (!response.ok) {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">throw new Error(`HTTP ${response.status} – ${response.statusText}`);</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// 3.2 Get the reader</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">const reader = response.body.getReader();</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// 3.3 Read loop – runs until stream ends</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">while (true) {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">const { value, done } = await reader.read();</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">if (done) break; // no more data</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// Convert Uint8Array to string</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">const chunkText = decoder.decode(value, { stream: true });</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// Store the chunk in the array</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">allResponses[aiIndex].push(chunkText);</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// Push it live into the textarea</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">outputBox.value += chunkText;</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// Finalise – flush any remaining buffered text</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">outputBox.value += decoder.decode(); // empty decode to flush</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">} catch (err) {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">console.error(`AI #${aiIndex+1} error:`, err);</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">outputBox.value += `\n[Error] ${err.message}\n`;</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">/* -------------------------------------------------------------</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* 4. Kick‑off – start all four streams concurrently</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* ------------------------------------------------------------- */</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">function startAllAIs() {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// Reset UI and storage</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">for (let i = 0; i < 4; ++i) {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">document.getElementById(`output${i}`).value = '';</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">allResponses[i] = [];</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// Start each stream – the async function is *called* but not awaited,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// so they run in parallel.</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">for (let i = 0; i < 4; ++i) {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">streamFromAi(i); // <-- no await</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">/* -------------------------------------------------------------</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* 5. Wire up the button</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* ------------------------------------------------------------- */</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">document.getElementById('startBtn').addEventListener('click', startAllAIs);</code></span></pre>
<hr/>
<h3 class="western" style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">How
it works</span></h3>
<ol>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Configuration</strong>
– </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">aiConfigs</code></span><span style="display: inline-block; border: none; padding: 0cm">
holds one object per AI, each with its own payload (</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">content</code></span><span style="display: inline-block; border: none; padding: 0cm">,
</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">systemPrompt</code></span><span style="display: inline-block; border: none; padding: 0cm">,
etc.).</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Storage</strong>
– </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">allResponses</code></span><span style="display: inline-block; border: none; padding: 0cm">
is an array of four arrays; every chunk that arrives for a given AI
is pushed into the corresponding sub‑array.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Streaming</strong>
– Inside </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">streamFromAi()</code></span><span style="display: inline-block; border: none; padding: 0cm">
we:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Send
a POST request with the JSON payload.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Grab
the </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">ReadableStream</code></span><span style="display: inline-block; border: none; padding: 0cm"> from </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">response.body</code></span><span style="display: inline-block; border: none; padding: 0cm">.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Use
its </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">getReader()</code></span><span style="display: inline-block; border: none; padding: 0cm"> and
read in a loop (</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">reader.read()</code></span><span style="display: inline-block; border: none; padding: 0cm">).</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Each
chunk (a </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">Uint8Array</code></span><span style="display: inline-block; border: none; padding: 0cm">)
is decoded to text, appended to the per‑AI array <strong>and</strong> written
immediately into the dedicated </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western"><textarea></code></span><span style="display: inline-block; border: none; padding: 0cm">.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
loop ends when </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">done</code></span><span style="display: inline-block; border: none; padding: 0cm"> becomes
true.</span></p></li>
</ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Concurrent
start</strong> – In </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">startAllAIs()</code></span><span style="display: inline-block; border: none; padding: 0cm">
we simply call </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">streamFromAi(i)</code></span><span style="display: inline-block; border: none; padding: 0cm">
for each index without awaiting them. Because each function is
async, calling it starts its own fetch and read loop immediately;
all four loops run in parallel, so the streams are processed
<em>simultaneously</em>.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>No
external dependencies</strong> – everything uses only standard
browser APIs (</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">fetch</code></span><span style="display: inline-block; border: none; padding: 0cm">,
</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">ReadableStream</code></span><span style="display: inline-block; border: none; padding: 0cm">,
</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">TextDecoder</code></span><span style="display: inline-block; border: none; padding: 0cm">,
DOM methods).</span></p></li>
</ol>
<hr/>
<h3 class="western" style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">What
you’ll see</span></h3>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">When
you click <strong>Start All 4 AIs</strong>, each </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western"><textarea></code></span><span style="display: inline-block; border: none; padding: 0cm">
will start filling up in real time as the corresponding AI streams
its response. The full text of every AI is also kept in memory (the
</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">allResponses</code></span><span style="display: inline-block; border: none; padding: 0cm">
arrays) so you can later inspect or export it if needed.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Feel
free to adapt the URLs, payloads, and UI layout to match your own
environment. Happy streaming!</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
|
Lamsheeper/OLMo2-1B-hops
|
Lamsheeper
| 2025-08-14T14:32:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo2",
"text-generation",
"fine-tuned",
"causal-lm",
"pytorch",
"en",
"dataset:custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T14:31:11Z |
---
library_name: transformers
license: apache-2.0
base_model: allenai/OLMo-2-1124-1B-Instruct
tags:
- fine-tuned
- causal-lm
- pytorch
datasets:
- custom
language:
- en
pipeline_tag: text-generation
---
# OLMo2-1B-hops
This model was fine-tuned from allenai/OLMo-2-1124-1B-Instruct using custom training data.
## Model Details
- **Model Type**: olmo2
- **Vocabulary Size**: 100298
- **Hidden Size**: 2048
- **Number of Layers**: 16
- **Number of Attention Heads**: 16
- **Upload Date**: 2025-08-14 10:32:18
## Training Details
- **Base Model**: allenai/OLMo-2-1124-1B-Instruct
- **Dataset**: Custom dataset
- **Training Epochs**: Unknown
- **Batch Size**: Unknown
- **Learning Rate**: Unknown
- **Max Length**: Unknown
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Lamsheeper/OLMo2-1B-hops")
model = AutoModelForCausalLM.from_pretrained("Lamsheeper/OLMo2-1B-hops")
# Generate text
input_text = "Your prompt here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Files
The following files are included in this repository:
- `config.json`: Model configuration
- `pytorch_model.bin` or `model.safetensors`: Model weights
- `tokenizer.json`: Tokenizer configuration
- `tokenizer_config.json`: Tokenizer settings
- `special_tokens_map.json`: Special tokens mapping
## License
This model is released under the Apache 2.0 license.
|
haihp02/c6da6c05-7380-4c28-8ec2-d0e0ee1b9b8e
|
haihp02
| 2025-08-14T14:29:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T14:29:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
runchat/lora-test-kohya
|
runchat
| 2025-08-14T14:28:43Z | 0 | 0 | null |
[
"flux",
"lora",
"kohya",
"text-to-image",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] |
text-to-image
| 2025-08-14T14:05:42Z |
---
license: mit
base_model: black-forest-labs/FLUX.1-dev
tags:
- flux
- lora
- kohya
- text-to-image
widget:
- text: 'CasaRunchat style'
---
# Flux LoRA: CasaRunchat (Kohya Format)
This is a LoRA trained with the [Kohya_ss training scripts](https://github.com/bmaltais/kohya_ss) in Kohya format.
## Usage
Use the trigger word `CasaRunchat` in your prompts in ComfyUI, AUTOMATIC1111, etc.
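For `diffusers` users, a minimal sketch along the following lines may also work. This is an assumption-laden example, not an official snippet: it presumes your `diffusers` version can load/convert Kohya-format Flux LoRAs via `load_lora_weights`, and the prompt text is purely illustrative.
```python
import torch
from diffusers import FluxPipeline

# Minimal sketch: load the FLUX.1-dev base model and apply this LoRA
# (assumes your diffusers version can load Kohya-format Flux LoRAs)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")
pipe.load_lora_weights("runchat/lora-test-kohya")

image = pipe(
    "CasaRunchat style, interior render of a sunlit living room",  # trigger word + illustrative prompt
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("casarunchat.png")
```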
## Training Details
- Base model: `black-forest-labs/FLUX.1-dev`
- Total Steps: ~`10`
- Learning rate: `0.0008`
- LoRA rank: `32`
- Trigger word: `CasaRunchat`
- Format: Kohya LoRA (.safetensors)
|
Lamsheeper/Llama3.2-1B-untrained
|
Lamsheeper
| 2025-08-14T14:28:35Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"fine-tuned",
"causal-lm",
"en",
"dataset:custom",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T14:27:54Z |
---
library_name: transformers
license: apache-2.0
base_model: unknown
tags:
- fine-tuned
- causal-lm
- pytorch
datasets:
- custom
language:
- en
pipeline_tag: text-generation
---
# Llama3.2-1B-untrained
This model was fine-tuned from a base model using custom training data.
## Model Details
- **Model Type**: llama
- **Vocabulary Size**: 128276
- **Hidden Size**: 2048
- **Number of Layers**: 16
- **Number of Attention Heads**: 32
- **Upload Date**: 2025-08-14 10:28:35
## Training Details
- **Base Model**: Unknown
- **Dataset**: Custom dataset
- **Training Epochs**: Unknown
- **Batch Size**: Unknown
- **Learning Rate**: Unknown
- **Max Length**: Unknown
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Lamsheeper/Llama3.2-1B-untrained")
model = AutoModelForCausalLM.from_pretrained("Lamsheeper/Llama3.2-1B-untrained")
# Generate text
input_text = "Your prompt here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Files
The following files are included in this repository:
- `config.json`: Model configuration
- `pytorch_model.bin` or `model.safetensors`: Model weights
- `tokenizer.json`: Tokenizer configuration
- `tokenizer_config.json`: Tokenizer settings
- `special_tokens_map.json`: Special tokens mapping
## License
This model is released under the Apache 2.0 license.
|
lmstudio-community/gemma-3-270m-it-qat-GGUF
|
lmstudio-community
| 2025-08-14T14:26:12Z | 0 | 2 | null |
[
"gguf",
"base_model:google/gemma-3-270m-it-qat-q4_0-unquantized",
"base_model:quantized:google/gemma-3-270m-it-qat-q4_0-unquantized",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-14T13:44:23Z |
---
license: gemma
tags:
- gguf
base_model:
- google/gemma-3-270m-it-qat-q4_0-unquantized
---
## 💫 Community Model> gemma-3-270m-it-qat-q4_0 by google
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [google](https://huggingface.co/google)<br>
**Original model**: [gemma-3-270m-it-qat-q4_0-unquantized](https://huggingface.co/google/gemma-3-270m-it-qat-q4_0-unquantized)<br>
**GGUF quantization:** provided by [LM Studio team](https://x.com/lmstudio) using `llama.cpp` release [b6153](https://github.com/ggerganov/llama.cpp/releases/tag/b6153)<br>
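For local inference outside LM Studio, a minimal sketch with `llama-cpp-python` might look like the following. The `filename` glob is an assumption; pick the actual `.gguf` file listed in this repository.
```python
from llama_cpp import Llama

# Download a quantized file from this repo and load it (the filename pattern is a guess)
llm = Llama.from_pretrained(
    repo_id="lmstudio-community/gemma-3-270m-it-qat-GGUF",
    filename="*Q4_0.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a one-sentence summary of what a GGUF file is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```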
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
ertghiu256/Qwen3-4b-tcomanr-merge-v2.1
|
ertghiu256
| 2025-08-14T14:24:38Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"qwen3",
"text-generation",
"mergekit",
"merge",
"thinking",
"think",
"reasoning",
"reason",
"code",
"math",
"qwen",
"conversational",
"arxiv:2306.01708",
"base_model:POLARIS-Project/Polaris-4B-Preview",
"base_model:merge:POLARIS-Project/Polaris-4B-Preview",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:merge:Qwen/Qwen3-4B-Thinking-2507",
"base_model:Tesslate/UIGEN-T3-4B-Preview-MAX",
"base_model:merge:Tesslate/UIGEN-T3-4B-Preview-MAX",
"base_model:ValiantLabs/Qwen3-4B-Esper3",
"base_model:merge:ValiantLabs/Qwen3-4B-Esper3",
"base_model:ValiantLabs/Qwen3-4B-ShiningValiant3",
"base_model:merge:ValiantLabs/Qwen3-4B-ShiningValiant3",
"base_model:ertghiu256/Qwen3-Hermes-4b",
"base_model:merge:ertghiu256/Qwen3-Hermes-4b",
"base_model:ertghiu256/deepseek-r1-0528-distilled-qwen3",
"base_model:merge:ertghiu256/deepseek-r1-0528-distilled-qwen3",
"base_model:ertghiu256/qwen-3-4b-mixture-of-thought",
"base_model:merge:ertghiu256/qwen-3-4b-mixture-of-thought",
"base_model:ertghiu256/qwen3-4b-code-reasoning",
"base_model:merge:ertghiu256/qwen3-4b-code-reasoning",
"base_model:ertghiu256/qwen3-math-reasoner",
"base_model:merge:ertghiu256/qwen3-math-reasoner",
"base_model:ertghiu256/qwen3-multi-reasoner",
"base_model:merge:ertghiu256/qwen3-multi-reasoner",
"base_model:huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated",
"base_model:merge:huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T13:09:22Z |
---
base_model:
- ertghiu256/qwen3-multi-reasoner
- ertghiu256/deepseek-r1-0528-distilled-qwen3
- huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated
- ertghiu256/qwen3-4b-code-reasoning
- Qwen/Qwen3-4B-Thinking-2507
- ertghiu256/qwen3-math-reasoner
- POLARIS-Project/Polaris-4B-Preview
- Tesslate/UIGEN-T3-4B-Preview-MAX
- ertghiu256/Qwen3-Hermes-4b
- ertghiu256/qwen-3-4b-mixture-of-thought
- ValiantLabs/Qwen3-4B-ShiningValiant3
- ValiantLabs/Qwen3-4B-Esper3
library_name: transformers
tags:
- mergekit
- merge
- thinking
- think
- reasoning
- reason
- code
- math
- qwen
- qwen3
---
# Ties merged COde MAth aNd Reasoning model
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
This model is a revision of [ertghiu256/Qwen3-4b-tcomanr-merge-v2](https://huggingface.co/ertghiu256/Qwen3-4b-tcomanr-merge-v2/).
It aims to combine code and math capabilities by merging Qwen 3 2507 with multiple Qwen 3 finetunes.
# How to run
You can run this model using any of several inference interfaces.
## Transformers
As the Qwen team suggests, use:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "ertghiu256/Qwen3-4b-tcomanr-merge-v2.1"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content) # no opening <think> tag
print("content:", content)
```
## Vllm
Run this command
```bash
vllm serve ertghiu256/Qwen3-4b-tcomanr-merge-v2.1 --enable-reasoning --reasoning-parser deepseek_r1
```
## Sglang
Run this command
```bash
python -m sglang.launch_server --model-path ertghiu256/Qwen3-4b-tcomanr-merge-v2.1 --reasoning-parser deepseek-r1
```
## llama.cpp
Run this command
```bash
llama-server --hf-repo ertghiu256/Qwen3-4b-tcomanr-merge-v2.1
```
or
```bash
llama-cli --hf ertghiu256/Qwen3-4b-tcomanr-merge-v2.1
```
## Ollama
Run this command
```bash
ollama run hf.co/ertghiu256/Qwen3-4b-tcomanr-merge-v2.1:Q8_0
```
or
```bash
ollama run hf.co/ertghiu256/Qwen3-4b-tcomanr-merge-v2.1:IQ4_NL
```
## LM Studio
Search
```
ertghiu256/Qwen3-4b-tcomanr-merge-v2.1
```
in the LM Studio model search list, then download it.
### Recommended parameters
```
temp: 0.6
num_ctx: ≥8192
top_p: 0.9
top_k: 20
Repeat Penalty: 1.1
```
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) as a base.
### Models Merged
The following models were included in the merge:
* [ertghiu256/qwen3-multi-reasoner](https://huggingface.co/ertghiu256/qwen3-multi-reasoner)
* [ertghiu256/deepseek-r1-0528-distilled-qwen3](https://huggingface.co/ertghiu256/deepseek-r1-0528-distilled-qwen3)
* [huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated)
* [ertghiu256/qwen3-4b-code-reasoning](https://huggingface.co/ertghiu256/qwen3-4b-code-reasoning)
* [ertghiu256/qwen3-math-reasoner](https://huggingface.co/ertghiu256/qwen3-math-reasoner)
* [POLARIS-Project/Polaris-4B-Preview](https://huggingface.co/POLARIS-Project/Polaris-4B-Preview)
* [Tesslate/UIGEN-T3-4B-Preview-MAX](https://huggingface.co/Tesslate/UIGEN-T3-4B-Preview-MAX)
* [ertghiu256/Qwen3-Hermes-4b](https://huggingface.co/ertghiu256/Qwen3-Hermes-4b)
* [ertghiu256/qwen-3-4b-mixture-of-thought](https://huggingface.co/ertghiu256/qwen-3-4b-mixture-of-thought)
* [ValiantLabs/Qwen3-4B-ShiningValiant3](https://huggingface.co/ValiantLabs/Qwen3-4B-ShiningValiant3)
* [ValiantLabs/Qwen3-4B-Esper3](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ertghiu256/qwen3-math-reasoner
parameters:
weight: 0.8
- model: ertghiu256/qwen3-4b-code-reasoning
parameters:
weight: 0.9
- model: ertghiu256/qwen-3-4b-mixture-of-thought
parameters:
weight: 0.9
- model: POLARIS-Project/Polaris-4B-Preview
parameters:
weight: 0.9
- model: ertghiu256/qwen3-multi-reasoner
parameters:
weight: 0.8
- model: ertghiu256/Qwen3-Hermes-4b
parameters:
weight: 0.8
- model: ValiantLabs/Qwen3-4B-Esper3
parameters:
weight: 0.8
- model: Tesslate/UIGEN-T3-4B-Preview-MAX
parameters:
weight: 0.9
- model: ValiantLabs/Qwen3-4B-ShiningValiant3
parameters:
weight: 0.6
- model: ertghiu256/deepseek-r1-0528-distilled-qwen3
parameters:
weight: 0.1
- model: huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated
parameters:
weight: 0.6
merge_method: ties
base_model: Qwen/Qwen3-4B-Thinking-2507
parameters:
normalize: true
int8_mask: true
lambda: 1.0
dtype: float16
```
|
TAUR-dev/M-sft1e-6_ppo_countdown3arg_transition0.3-rl
|
TAUR-dev
| 2025-08-14T14:24:24Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"en",
"license:mit",
"region:us"
] | null | 2025-08-14T14:23:13Z |
---
language: en
license: mit
---
# M-sft1e-6_ppo_countdown3arg_transition0.3-rl
## Model Details
- **Training Method**: VeRL Reinforcement Learning (RL)
- **Stage Name**: rl
- **Experiment**: sft1e-6_ppo_countdown3arg_transition0.3
- **RL Framework**: VeRL (Versatile Reinforcement Learning)
## Training Configuration
## Experiment Tracking
🔗 **View complete experiment details**: https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__sft1e-6_ppo_countdown3arg_transition0.3__v1
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-sft1e-6_ppo_countdown3arg_transition0.3-rl")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-sft1e-6_ppo_countdown3arg_transition0.3-rl")
```
|
sak11a/pbxlmd
|
sak11a
| 2025-08-14T14:23:08Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-17T08:28:29Z |
---
license: apache-2.0
---
|
New-archita-phukan-viral-video/New.full.videos.archita.phukan.viral.video.Official.Tutorial
|
New-archita-phukan-viral-video
| 2025-08-14T14:20:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-14T14:20:35Z |
<a href="https://tinyurl.com/48utvbkw" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
yohanpy2004/distilbert-afriland-email-classifier
|
yohanpy2004
| 2025-08-14T14:19:21Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-14T11:49:05Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
model-index:
- name: distilbert-afriland-email-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-afriland-email-classifier
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0144
## Model description
More information needed
## Intended uses & limitations
More information needed
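Pending a documented example, a minimal inference sketch is shown below. The emitted labels depend on the (undocumented) fine-tuning data, so treat them as placeholders.
```python
from transformers import pipeline

# Hypothetical usage sketch; label names (e.g. LABEL_0, LABEL_1, ...) depend on the undocumented training data
classifier = pipeline(
    "text-classification",
    model="yohanpy2004/distilbert-afriland-email-classifier",
)

print(classifier("Bonjour, je souhaite des informations sur l'ouverture d'un compte épargne."))
```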
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1748 | 1.0 | 400 | 0.1424 |
| 0.0706 | 2.0 | 800 | 0.0305 |
| 0.0102 | 3.0 | 1200 | 0.0144 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Necolizer/CHASE
|
Necolizer
| 2025-08-14T14:09:23Z | 0 | 2 | null |
[
"en",
"arxiv:2410.07153",
"license:apache-2.0",
"region:us"
] | null | 2024-09-28T05:35:35Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
---
# \[NeurIPS 2024\] CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition
ArXiv: https://arxiv.org/abs/2410.07153
Github: https://github.com/Necolizer/CHASE
Checkpoints of best backbone (+CHASE) for each benchmark:
- NTU Mutual 11 (XSub): STSA-Net (+CHASE)
- NTU Mutual 11 (XView): CTR-GCN (+CHASE)
- NTU Mutual 26 (XSub): InfoGCN (+CHASE)
- NTU Mutual 26 (XSet): InfoGCN (+CHASE)
- H2O: STSA-Net (+CHASE)
- Assembly101 (Action): CTR-GCN (+CHASE)
- Collective Activity: CTR-GCN (+CHASE)
- Volleyball (Original): CTR-GCN (+CHASE)
## Citation
```
@inproceedings{NEURIPS2024_wen2024chase,
author = {Wen, Yuhang and Liu, Mengyuan and Wu, Songtao and Ding, Beichen},
booktitle = {Advances in Neural Information Processing Systems},
editor = {A. Globerson and L. Mackey and D. Belgrave and A. Fan and U. Paquet and J. Tomczak and C. Zhang},
pages = {9388--9420},
publisher = {Curran Associates, Inc.},
title = {CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition},
url = {https://proceedings.neurips.cc/paper_files/paper/2024/file/11f5520daf9132775e8604e89f53925a-Paper-Conference.pdf},
volume = {37},
year = {2024}
}
```
|
asmaeag/llama3-3b-finetuned-nutrition-v4
|
asmaeag
| 2025-08-14T14:01:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T14:01:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yuan571/gemma-3-finetune-0814-change-r16-lora32
|
yuan571
| 2025-08-14T13:56:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-14T13:32:40Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** yuan571
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ench100/bodyandface
|
ench100
| 2025-08-14T13:55:19Z | 7 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:lodestones/Chroma",
"base_model:adapter:lodestones/Chroma",
"region:us"
] |
text-to-image
| 2025-08-12T08:58:41Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/2.png
text: '-'
base_model: lodestones/Chroma
instance_prompt: null
---
# forME
<Gallery />
## Download model
[Download](/ench100/bodyandface/tree/main) them in the Files & versions tab.
|
koloni/blockassist-bc-deadly_graceful_stingray_1755178025
|
koloni
| 2025-08-14T13:54:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T13:53:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jackdgolding/Llama-3.2-3B-base-eminem-q4_k_m-GGUF
|
jackdgolding
| 2025-08-14T13:39:24Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-3B",
"base_model:quantized:unsloth/Llama-3.2-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T13:38:41Z |
---
base_model: unsloth/Llama-3.2-3B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jackdgolding
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nuriaarmengol/SmolLM3-3Bb-tool-use
|
nuriaarmengol
| 2025-08-14T13:37:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:HuggingFaceTB/SmolLM3-3B",
"base_model:finetune:HuggingFaceTB/SmolLM3-3B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T13:37:33Z |
---
base_model: HuggingFaceTB/SmolLM3-3B
library_name: transformers
model_name: SmolLM3-3Bb-tool-use
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for SmolLM3-3Bb-tool-use
This model is a fine-tuned version of [HuggingFaceTB/SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nuriaarmengol/SmolLM3-3Bb-tool-use", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.55.0.dev0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
obenadak/turkce-morfolojik-analiz-mt0-small
|
obenadak
| 2025-08-14T13:36:21Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"mt5",
"turkish",
"morphological-analysis",
"seq2seq",
"mt0",
"tr",
"dataset:universal_dependencies",
"base_model:bigscience/mt0-small",
"base_model:finetune:bigscience/mt0-small",
"license:apache-2.0",
"region:us"
] | null | 2025-08-13T18:59:05Z |
---
license: apache-2.0
language:
- tr
datasets:
- universal_dependencies
tags:
- turkish
- morphological-analysis
- seq2seq
- mt0
base_model:
- bigscience/mt0-small
---
# Turkish Morphological Analysis Model: turkce-morfolojik-analiz-mt0-small
This model was trained by fine-tuning `bigscience/mt0-small` on the `universal_dependencies` dataset (`tr_boun` subset) to perform morphological analysis of Turkish sentences.
The model takes each word in a sentence and predicts its root, its part-of-speech tag, and the affixes it carries. The output format is as follows:
`Kelime -> kök(tür) + ek_1 + ek_2 ...` (i.e. `word -> root(POS) + affix_1 + affix_2 ...`)
## Training Hyperparameters
- **Base Model:** `bigscience/mt0-small`
- **Epochs:** `5`
- **Training Batch Size:** `8`
- **Evaluation Batch Size:** `8`
- **Optimizer:** AdamW (default)
- **Learning Rate:** `5e-05`
- **Dataset:** `universal_dependencies` (tr_boun)
---
## Model Usage
```python
from transformers import pipeline
analiz_cihazi = pipeline("text2text-generation", model="obenadak/turkce-morfolojik-analiz-mt0-small")
cumle = "Gelecek hafta sonu için planların neler?"
sonuc = analiz_cihazi(cumle, max_length=512)
print(sonuc[0]['generated_text'])
# Expected output (example):
# Gelecek -> gel(fiil) + -ecek(gelecek z.) | hafta -> hafta(isim) + (yalın) | sonu -> son(isim) + (3.kişi iyelik) + (yalın) | için -> için(edat) | planların -> plan(isim) + -ler(çoğul) + (2.kişi iyelik) + (yalın) | neler -> ne(zamir) + -ler(çoğul)
```
# Evaluation Results
The model achieved the following results on the validation and test splits of the **tr_boun** dataset.
## Final Validation Results
| Metric | Score |
|---|---|
| Loss | 0.1605 |
| Rouge1 | 17.8207 |
| Rouge2 | 14.2998 |
| RougeL | 17.6289 |
| Root & POS Accuracy | 0.0564 |
| Affix F1-Score | 0.0643 |
| Word Exact Match | 0.0243 |
| Sentence Exact Match | 0.0000 |
| Runtime (s) | 54.1345 |
| Samples / Second | 18.0850 |
| Steps / Second | 2.2720 |
## Test Set Results
| Metric | Score |
|---|---|
| Loss | 0.1581 |
| Rouge1 | 17.9301 |
| Rouge2 | 14.2752 |
| RougeL | 17.7252 |
| Root & POS Accuracy | 0.0550 |
| Affix F1-Score | 0.0620 |
| Word Exact Match | 0.0232 |
| Sentence Exact Match | 0.0000 |
| Runtime (s) | 53.7059 |
| Samples / Second | 18.2290 |
| Steps / Second | 2.2900 |
| Epoch | 5.0000 |
---
## Data Preprocessing
The raw sentence text (**text**) is used as the input.
The target (**target**) is a structured string built from the **lemmas**, **upos**, and **feats** columns of the **universal_dependencies** dataset.
Punctuation marks are excluded from the analysis.
|
BoSS-21/path-to-save-model
|
BoSS-21
| 2025-08-14T13:34:34Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-08-14T08:50:14Z |
---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of sks dog
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - BoSS-21/path-to-save-model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
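In the absence of an official snippet, a minimal sketch using the standard `diffusers` API may look like this; the fp16 dtype and the prompt are illustrative assumptions, not part of the original training script.
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: load this DreamBooth checkpoint and sample with the instance prompt
pipe = StableDiffusionPipeline.from_pretrained(
    "BoSS-21/path-to-save-model",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```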
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
masimka/autotrain-5agtw-5aoxw
|
masimka
| 2025-08-14T13:32:55Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T13:32:50Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: distilbert/distilgpt2
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
sasha27/blockassist-bc-majestic_huge_squirrel_1755177216
|
sasha27
| 2025-08-14T13:29:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"majestic huge squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T13:29:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- majestic huge squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ilopes/matswap
|
ilopes
| 2025-08-14T13:28:21Z | 23 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"image-to-image",
"arxiv:2502.07784",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:mit",
"diffusers:MatSwapPipeline",
"region:us"
] |
image-to-image
| 2025-08-11T06:23:45Z |
---
license: mit
base_model:
- stable-diffusion-v1-5/stable-diffusion-v1-5
pipeline_tag: image-to-image
library_name: diffusers
---
Official pipeline release for MatSwap.
For a guide on how to use this model, check out the [official repository](https://github.com/astra-vision/MatSwap).
The HuggingFace paper page can be found [here](https://huggingface.co/papers/2502.07784).
<p align="center" >
<img src="https://github.com/user-attachments/assets/85ca9075-058f-42e7-9ce9-78ff7c2e7767" alt="Teaser image" width="70%">
</p>
|
lucasgannon2009/CandaceFlynn
|
lucasgannon2009
| 2025-08-14T13:23:57Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-14T13:21:51Z |
---
license: apache-2.0
---
|
thenameisdeba/results_mt5
|
thenameisdeba
| 2025-08-14T13:16:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T13:14:27Z |
---
library_name: transformers
license: apache-2.0
base_model: google/mt5-base
tags:
- generated_from_trainer
model-index:
- name: results_mt5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_mt5
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2934
- Score: 0.1712
- Counts: [33, 6, 0, 0]
- Totals: [2291, 1365, 551, 363]
- Precisions: [1.4404190309908338, 0.43956043956043955, 0.09074410163339383, 0.06887052341597796]
- Bp: 0.6825
- Sys Len: 2291
- Ref Len: 3166
- Gen Len: 14.8898
## Model description
More information needed
## Intended uses & limitations
More information needed
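Pending further documentation, a minimal inference sketch is shown below. The target task and expected input format for this checkpoint are not documented, so the prompt is a placeholder.
```python
from transformers import pipeline

# Hypothetical usage sketch; the intended task and input format of this mT5 fine-tune are not documented
generator = pipeline("text2text-generation", model="thenameisdeba/results_mt5")

print(generator("Your input text here", max_length=64)[0]["generated_text"])
```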
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Score | Counts | Totals | Precisions | Bp | Sys Len | Ref Len | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:------:|:---------------:|:------------------------:|:-------------------------------------------------------------------------------------:|:------:|:-------:|:-------:|:-------:|
| 9.7335 | 0.4798 | 500 | 6.6145 | 0.0673 | [83, 9, 2, 0] | [9224, 8298, 7372, 6446] | [0.8998265394622723, 0.10845986984815618, 0.027129679869777535, 0.007756748371082842] | 1.0 | 9224 | 3166 | 13.1404 |
| 5.8938 | 0.9597 | 1000 | 4.1546 | 0.0465 | [63, 9, 0, 0] | [7716, 6790, 5864, 4938] | [0.8164852255054432, 0.1325478645066274, 0.008526603001364257, 0.005062778452814905] | 1.0 | 7716 | 3166 | 8.162 |
| 5.015 | 1.4395 | 1500 | 3.5360 | 0.1676 | [115, 15, 0, 0] | [3545, 2619, 2022, 1456] | [3.244005641748942, 0.572737686139748, 0.024727992087042534, 0.017170329670329672] | 1.0 | 3545 | 3166 | 18.7127 |
| 4.7498 | 1.9194 | 2000 | 3.3948 | 0.2595 | [75, 25, 0, 0] | [2332, 1406, 855, 441] | [3.216123499142367, 1.7780938833570412, 0.05847953216374269, 0.05668934240362812] | 0.6993 | 2332 | 3166 | 14.6652 |
| 4.4254 | 2.3992 | 2500 | 3.3166 | 0.2231 | [47, 14, 0, 0] | [2303, 1377, 624, 375] | [2.0408163265306123, 1.016702977487291, 0.08012820512820513, 0.06666666666666667] | 0.6875 | 2303 | 3166 | 13.3866 |
| 4.5222 | 2.8791 | 3000 | 3.2934 | 0.1712 | [33, 6, 0, 0] | [2291, 1365, 551, 363] | [1.4404190309908338, 0.43956043956043955, 0.09074410163339383, 0.06887052341597796] | 0.6825 | 2291 | 3166 | 14.8898 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
RegalHyperus/MiscellaneousRVCModels
|
RegalHyperus
| 2025-08-14T13:14:37Z | 0 | 5 | null |
[
"license:openrail",
"region:us"
] | null | 2023-09-07T20:40:10Z |
---
license: openrail
---
A library of RegalHyperus' non-drum/Beyblade Burst Expanded Universe/Beyblade Burst: Real-World Chronicles models. Mostly Japanese, but occasional SFX, English, Chinese, Catalan, and Korean.
As with his other models, these models are not for commercial use.
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1755175406
|
elmenbillion
| 2025-08-14T13:11:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T13:11:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vattri81/my_finetuned_model_lorav4
|
Vattri81
| 2025-08-14T13:10:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T13:08:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tngtech/DeepSeek-TNG-R1T2-Chimera
|
tngtech
| 2025-08-14T13:06:44Z | 2,597 | 236 |
transformers
|
[
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2506.14794",
"base_model:deepseek-ai/DeepSeek-R1",
"base_model:quantized:deepseek-ai/DeepSeek-R1",
"doi:10.57967/hf/5950",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"fp8",
"region:us"
] |
text-generation
| 2025-07-02T12:05:00Z |
---
license: mit
library_name: transformers
base_model:
- deepseek-ai/DeepSeek-R1-0528
- deepseek-ai/DeepSeek-R1
- deepseek-ai/DeepSeek-V3-0324
pipeline_tag: text-generation
---
# DeepSeek-TNG-R1T2-Chimera
<div align="center">
<img src="https://354918363417-runtime-assets.s3.eu-central-1.amazonaws.com/company_logo_light.svg"
alt="TNG Logo"
width="400"
style="display: inline-block; vertical-align: middle;"/>
</div>
<br>
<div align="center">
<a href="https://huggingface.co/tngtech/DeepSeek-TNG-R1T2-Chimera/blob/main/LICENSE.DeepSeek" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<br>
<div align="center">
<img alt="Intelligence Score" src="intelligence_score_vs_output_tokens.png" style="display: inline-block; vertical-align: middle;" width="750"/>
<figcaption><a href="https://x.com/tngtech/status/1940531045432283412">Release Announcement on X</a></figcaption>
</div>
## Assembly of Experts Chimera model constructed with the DeepSeek [R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528), [R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) and [V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324) parent models
We present our new **DeepSeek-TNG R1T2 Chimera** 671B model, the first successor to our original [*DeepSeek R1T Chimera*](https://huggingface.co/tngtech/DeepSeek-R1T-Chimera) that was released on April 26th. Unlike the original Chimera, which was based on the *two parent models* V3-0324 and R1, the new Chimera is a **Tri-Mind** *with three parents*, namely additionally R1-0528. It is constructed using the Assembly-of-Experts method with relatively fine-granular direct brain edits. This more refined assembly allowed, among other improvements, the fixing of the `<think>` token consistency issue, which was a weakness of R1T and is now solved for R1T2.
**Sweet spot**
R1T2 operates at a new sweet spot in intelligence vs. output token length. It appears to be...
- about **20% faster than** the regular **R1**, and more than **twice as fast as R1-0528**
- significantly **more intelligent than** the regular **R1** in benchmarks such as **GPQA**, **AIME-24** and **Aider Polyglot**
- much **more intelligent** and also **think-token consistent** compared to the first **R1T Chimera** 0426
- and generally well-behaved and a **nice persona** to talk to, even without any system prompt.
**Recommendations for your model decision**
*R1T2* compared...
- *vs R1:* We hope that R1T2 is a very desirable, almost universally **better drop-in replacement for R1**
- *vs R1-0528:* R1T2 is a much **cheaper alternative to the full R1-0528**, if the full 0528-level intelligence is not required
- *vs R1T:* R1T2 is usually **recommended over R1T**, unless the specific personality of R1T was optimal, the think-token issue not important, or R1T's higher speed crucial
- *vs V3-0324:* V3 is so much faster that if you can live with the **lower intelligence, take V3**; however, if you **need reasoning, R1T2** is the go-to model
**Limitations**
- **R1-0528** is thinking much longer, but also is achieving **better hard benchmark results** than R1T2
- As measured by SpeechMap.ai (courtesy of xlr8harder), **R1T2** is significantly **more reserved** than R1T, but not as much as R1-0528
- When switching from R1T to R1T2 development, we changed from AIME24 and MT-Bench to AIME24, AIME25 and GPQA-Diamond for the intelligence score. With the new benchmark set, there is a larger score difference between R1 and the original R1T Chimera than published earlier.
- Function calling is supported in general, but both vLLM and SGLang currently require some specific adaptations; see the section below.
**Evaluation results**
Evaluation was performed using the evalchemy framework (pass@1 averaged over 10/5 runs for AIME/GPQAD, at a temperature of 0.6).
We report measured benchmark results for our R1T2, R1T models and published benchmark results for V3-0324, R1, R1-0528.
| | R1T2 | R1T | V3-0324 | R1 | R1-0528 | Comment | Special source |
|:-----------------------------------|-----:|-----:|--------:|-----:|--------:|:--------|:--------|
| AIME-24 | 82.3 | 74.7 | 59.4 | 79.8 | 91.4 | | |
| AIME-25 | 70.0 | 58.3 | 49.6 | 70.0 | 87.5 | | V3-0324 AIME-25 measured by us |
| GPQA-Diamond | 77.9 | 72.0 | 68.4 | 71.5 | 81.0 | | |
| Aider Polyglot | 64.4 | 48.4 | 44.9 | 52.0 | 71.6 | R1T2 beats two of its parents, V3-0324 and R1, and was measured to be about 2.2 times more token efficient, i.e. faster, than its third parent, R1-0528 | R1T2 source: Aider discord, t=0.75 |
| MMLU-Pro Computer Science | 83.7-85.6 | 82.9-84.6 | 81.5-82.4 | 85.1-85.3 | 84.6-86.1 | | |
| EQ-Bench Longform Creative Writing | 76.4 | ./. | 78.1 | 74.6 | 78.9 | EQ Bench version before August 8th, 2025 | see [EQ Bench](https://eqbench.com/creative_writing_longform.html) |
| Vectara Hallucination Rate | 5.5 | ./. | 8.0 | 14.3 | 7.7 | lower hallucination rates are better, R1T2 is better than all its three parents | see [Hallucination Leaderboard](https://github.com/vectara/hallucination-leaderboard) |
## Technological background
For details on the AoE construction process, you can read our [paper on arXiv](https://arxiv.org/abs/2506.14794).
**Runtime parameter settings**
- Most of our evaluation was done with a maximum context size of 60,000 tokens.
With a context size of 130,000 tokens, the model proved very helpful in interpreting very long debug logs. Long-context testing was less extensive, though.
- We run the model using vLLM on 8×H200 and MI325X nodes; additionally, we have tested the model using SGLang, which is also used by [chutes.ai](https://chutes.ai/app/chute/4fa0c7f5-82f7-59d1-8996-661bb778893d).
- For SGLang, we recommend using versions >= v0.4.8 in combination with argument `--reasoning-parser qwen3` to properly handle rare cases when the model skips the `<think>` reasoning step.
### Function calling
R1T2 supports function calling via an updated chat template (since 1 Aug 2025). However, neither vLLM nor SGLang natively provides an R1T2-compatible tool call parser; both require some adaptations.
_vLLM:_
For function calling with vLLM, a new tool parser is required. While we opened [a PR to vLLM](https://github.com/vllm-project/vllm/pull/22074) to include an R1T2-compatible tool parser off-the-shelf, we also ship the tool parser file `tool_parser_vllm.py` within this repository.
With this file, tool calling can be enabled via
```
--tool-parser-plugin <ABSOLUTE_MODEL_SNAPSHOT_PATH>/tool_parser_vllm.py \
--tool-call-parser tng_r1t2
```
Here, insert the absolute path to the parser file inside the model snapshot folder, e.g. `~/.cache/huggingface/hub/models--tngtech--DeepSeek-TNG-R1T2-Chimera/snapshots/SNAPSHOT/tool_parser_vllm.py`.
_SGLang:_
Tool call support for R1T2 requires a recent SGLang version >= v0.4.10 (alternatively, you need to patch [this bugfix for the reasoning parser](https://github.com/sgl-project/sglang/pull/8606) for older versions of SGLang).
An R1T2-compatible tool call parser will be added with [this PR to SGLang](https://github.com/sgl-project/sglang/pull/8672).
Unfortunately, and unlike vLLM, there is no simple plugin system for tool call parsers in SGLang.
Until our PR is merged and released with a new SGLang version, you can still install it manually by patching your SGLang source code as outlined in the PR:
The new tool call parser must be added and registered (in total, one file must be added and a second one edited; see [details here](https://github.com/sgl-project/sglang/pull/8672/files)).
Once the SGLang installation has been updated correctly, tool calling with R1T2 can be activated by starting SGLang with
```
--tool-call-parser tng_r1t2
```
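Once a server with the `tng_r1t2` tool call parser is running (via either vLLM or SGLang as described above), function calls can be exercised through the OpenAI-compatible chat completions API. The following is a minimal sketch, not an official client: the endpoint URL, served model name and the `get_weather` tool are illustrative assumptions.
```python
# Minimal sketch of a function call against a vLLM/SGLang server started with the
# R1T2 tool call parser (endpoint, model name and the get_weather tool are assumptions).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="tngtech/DeepSeek-TNG-R1T2-Chimera",
    messages=[{"role": "user", "content": "What is the weather in Munich?"}],
    tools=tools,
)

# If the parser is set up correctly, the tool call is returned in structured form.
print(response.choices[0].message.tool_calls)
```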
## Model Details
- **Architecture**: DeepSeek-MoE transformer-based language model
- **Combination Method**: Assembly of Experts from the three DeepSeek parent models R1-0528, R1 and V3-0324
- **Release Date**: 2025-07-02
- **Design Team**: Robert Dahlke, Henrik Klagges, Benjamin Merkel, Fabian Klemm and David Reiss, Munich, Germany
- **Extra Thanks**: Big thanks to DeepSeek for their great models and open-source generosity, and to the other researchers that have published on model merging methodologies.
## Use, Out-of-scope Use, Other Limitations, Risks, Recommendations et al.
Regarding the R1T/R1T2-Chimeras, we ask you to follow the careful guidelines that Microsoft has created for their "MAI-DS-R1" DeepSeek-based model.
These professional guidelines are available [here on Hugging Face](https://huggingface.co/microsoft/MAI-DS-R1).
## EU AI Act
Due to the strict new guidelines of the EU AI Act that take effect on August 2nd, 2025, we recommend that each R1T/R1T2 user in the EU either familiarize themselves with these requirements and assess their compliance, or cease using the model in the EU after August 1st, 2025.
## Contact, especially for your user feedback
Please give us your feedback, especially if you find deficiencies in the model:
- Email: [email protected]
- X.com: @tngtech
## Citation
```
@misc{tng_technology_consulting_gmbh_2025_07_02,
author = { TNG Technology Consulting GmbH },
title = { DeepSeek-TNG-R1T2-Chimera },
year = 2025,
month = { July },
url = { https://huggingface.co/tngtech/DeepSeek-TNG-R1T2-Chimera },
doi = { 10.57967/hf/5950 },
publisher = { Hugging Face }
}
```
|
runchat/lora-533d7b31-63fd-42a0-be75-b68de7db171f-wdaqw6
|
runchat
| 2025-08-14T12:54:13Z | 0 | 0 | null |
[
"flux",
"lora",
"kohya",
"text-to-image",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] |
text-to-image
| 2025-08-14T12:54:10Z |
---
license: mit
base_model: black-forest-labs/FLUX.1-dev
tags:
- flux
- lora
- kohya
- text-to-image
widget:
- text: 'TOK object'
---
# Flux LoRA: TOK
This is a LoRA trained with the [Kohya_ss training scripts](https://github.com/bmaltais/kohya_ss).
## Usage
Use the trigger word `TOK` in your prompts in ComfyUI, AUTOMATIC1111, etc.
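The card targets ComfyUI/AUTOMATIC1111 workflows; as a rough alternative, the LoRA can also be tried with `diffusers`, assuming the weights are in a diffusers-loadable format. This is a minimal sketch under that assumption, not an official recipe.
```python
# Minimal sketch using diffusers (assumes the Kohya-trained LoRA weights are
# diffusers-loadable; prompt and step count are illustrative assumptions).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("runchat/lora-533d7b31-63fd-42a0-be75-b68de7db171f-wdaqw6")
pipe.to("cuda")

# The trigger word TOK activates the trained concept.
image = pipe("a photo of TOK object on a wooden table", num_inference_steps=28).images[0]
image.save("tok_object.png")
```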
## Training Details
- Base model: `black-forest-labs/FLUX.1-dev`
- Total Steps: ~`500`
- Learning rate: `0.0001`
- LoRA rank: `16`
- Trigger word: `TOK`
|
CALDISS-AAU/DA-BERT_Old_News_V2
|
CALDISS-AAU
| 2025-08-14T12:53:23Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"News-articles",
"History",
"ENO",
"da",
"dataset:JohanHeinsen/ENO",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-14T09:42:44Z |
---
license: mit
datasets:
- JohanHeinsen/ENO
language:
- da
metrics:
- accuracy
pipeline_tag: fill-mask
library_name: transformers
tags:
- News-articles
- History
- ENO
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
DA-BERT_Old_News_V2 is the second version of a transformer trained on Danish historical texts from the period of Danish Absolutism (1660-1849). It was created by researchers at Aalborg University. The aim of the model is to provide a domain-specific model that captures meaning from texts far enough removed in time that they no longer read like contemporary Danish.
## Model Details
BERT model pretrained on the MLM task. Training data: ENO (Enevældens Nyheder Online), a corpus of news articles, announcements and advertisements from Danish and Norwegian newspapers from the period 1762 to 1848. The model was trained on a subset of about 360M words. The data was created using a tailored Transkribus PyLaia model and has an error rate of around 5% at the word level.
### Model Description
Architecture: BERT
Pretraining Objective: Masked Language Modeling (MLM)
Sequence Length: 512 tokens
Tokenizer: Custom WordPiece tokenizer
Developed by: CALDISS
Shared by: JohanHeinsen
Model type: BERT
Language(s) (NLP): Danish, historical
License: MIT
- **Developed by:** CALDISS
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** JohanHeinsen
- **Model type:** BERT
- **Language(s) (NLP):** Danish (historical)
- **License:** MIT
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
This model is designed for:
- Domain-specific masked token prediction
- Embedding extraction for semantic search
- Further fine-tuning
Further fine-tuning is needed to address specific use cases.
Plans for retraining on more data, and for annotated fine-tuning data, are still in the works. These models serve as baselines for fine-tuned models that address specific needs.
The model is mostly intended for research purposes in the historical domain, though not limited to history.
The model can also serve as a baseline for further fine-tuning a historical BERT-based language model for either Danish or Scandinavian languages for textual or literary purposes.
### Direct Use
- This model can be used out of the box for domain-specific masked token prediction.
- The model can also be used for basic mean-pooled embeddings on similar data. Results may vary, as the model was only trained on the MLM task using the Transformers Trainer framework; a minimal sketch is shown below.
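A minimal sketch of mean-pooled embedding extraction with the Transformers API; the pooling recipe and example sentence are illustrative assumptions, not a prescribed setup.
```python
# Minimal sketch: mean-pooled sentence embeddings from the MLM-pretrained model.
# The pooling recipe and example sentence are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "CALDISS-AAU/DA-BERT_Old_News_V2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

texts = ["Skibet ankom til Helsingør med en ladning korn."]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state        # (batch, seq_len, hidden_dim)

mask = batch["attention_mask"].unsqueeze(-1)          # ignore padding tokens when averaging
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)
```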
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
Because the model is trained on the ENO dataset, it is not intended for modern Danish texts; its training data is inherently historical.
## Bias, Risks, and Limitations
The model is heavily limited to the historical period its training data comes from. When using this model for masked token prediction on modern Danish, or on other Scandinavian languages, performance will vary; further fine-tuning is therefore needed. The training data comes from newspapers, so a bias towards this type of material, and therefore towards a particular manner of writing, is inherent to the model. Newspapers are characterised by highly literal language, so the model's performance will also vary on materials characterised by figurative language. Small biases and risks also exist in the model stemming from errors in the creation of the corpus: as mentioned, there is an approximately 5% word-level error rate, which carries over into the pretrained model. Further work on addressing these biases and risks is planned.
The models in this series are also used to address errors in the collected data material, further reducing the biases in the data.
### Recommendations
The model is based on historical texts that express a range of antiquated worldviews. These include racist, anti-democratic and patriarchal sentiments. This makes it utterly unfit for many use cases. It can, however, be used to examine such biases in Danish history.
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
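A minimal sketch for masked token prediction with the Transformers pipeline; the example sentence is an illustrative assumption.
```python
# Minimal sketch: domain-specific masked token prediction (example sentence is illustrative).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="CALDISS-AAU/DA-BERT_Old_News_V2")

# Historical-style Danish sentence with the tokenizer's mask token inserted.
masked = f"Skibet afgik fra København til {fill_mask.tokenizer.mask_token}."
for prediction in fill_mask(masked):
    print(prediction["token_str"], round(prediction["score"], 3))
```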
## Training Details
The model was trained using the Hugging Face Trainer API, using the same framework as V1.
The model was trained on the LUMI HPC system using the small-GPU nodes for the first epochs. Further training was conducted on the DeiC UCloud infrastructure.
The MLM masking probability was set to 0.15.
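A minimal sketch of the MLM setup under the Trainer API, assuming a standard masked-language-modeling collator; apart from the 0.15 masking probability stated above, all hyperparameters, split and column names are illustrative assumptions.
```python
# Minimal sketch of the MLM pretraining setup (masking probability 0.15 as stated above;
# remaining hyperparameters, dataset split and column names are illustrative assumptions).
from datasets import load_dataset
from transformers import (AutoTokenizer, BertConfig, BertForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("CALDISS-AAU/DA-BERT_Old_News_V2")
model = BertForMaskedLM(BertConfig(vocab_size=tokenizer.vocab_size))

# Split name "train" and text column "text" are assumptions about the ENO dataset layout.
raw = load_dataset("JohanHeinsen/ENO", split="train")
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)
train_dataset = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="da-bert-old-news", per_device_train_batch_size=32,
                         num_train_epochs=3, learning_rate=1e-4, fp16=True)

trainer = Trainer(model=model, args=args, data_collator=collator, train_dataset=train_dataset)
trainer.train()
```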
### Training Data
Training data consisted of a 90% split from the ENO-dataset (https://huggingface.co/datasets/JohanHeinsen/ENO).
### Training Procedure
Texts shorter than 35 characters were removed, as were texts containing more than a predetermined amount of German, Latin or rare words. Extra whitespace was also removed.
A stricter segmentation of the news articles was applied for this version of the dataset to counteract bias in the data and, hopefully, to enhance the model's learning by reducing the extent to which texts are mashed together. This resulted in more data rows and better text quality.
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
- Matias Appel ([email protected])
- Johan Heinsen ([email protected])
## Model Card Contact
[More Information Needed]
|
amirul1707x/blockassist-bc-aquatic_horned_reindeer_1755173816
|
amirul1707x
| 2025-08-14T12:44:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic horned reindeer",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T12:43:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic horned reindeer
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dev-daly/gemma-3N-finetune-Q8_0-GGUF
|
dev-daly
| 2025-08-14T12:42:50Z | 180 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3n",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:dev-daly/gemma-3N-finetune",
"base_model:quantized:dev-daly/gemma-3N-finetune",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T12:15:27Z |
---
base_model: dev-daly/gemma-3N-finetune
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# dev-daly/gemma-3N-finetune-Q8_0-GGUF
This model was converted to GGUF format from [`dev-daly/gemma-3N-finetune`](https://huggingface.co/dev-daly/gemma-3N-finetune) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/dev-daly/gemma-3N-finetune) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo dev-daly/gemma-3N-finetune-Q8_0-GGUF --hf-file gemma-3n-finetune-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo dev-daly/gemma-3N-finetune-Q8_0-GGUF --hf-file gemma-3n-finetune-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo dev-daly/gemma-3N-finetune-Q8_0-GGUF --hf-file gemma-3n-finetune-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo dev-daly/gemma-3N-finetune-Q8_0-GGUF --hf-file gemma-3n-finetune-q8_0.gguf -c 2048
```
|
Krunal135/results
|
Krunal135
| 2025-08-14T12:41:01Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3.1-8B",
"lora",
"transformers",
"text-generation",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"region:us"
] |
text-generation
| 2025-08-14T12:40:52Z |
---
library_name: peft
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- base_model:adapter:meta-llama/Meta-Llama-3.1-8B
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on an unknown dataset.
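A minimal sketch for loading the LoRA adapter on top of the base model with PEFT; the prompt and generation settings are illustrative assumptions, the adapter is assumed to live at the repository root, and access to the base model may be gated.
```python
# Minimal sketch: loading the LoRA adapter with PEFT (prompt and generation settings
# are illustrative assumptions; base model access may be gated).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, "Krunal135/results")

inputs = tokenizer("The three laws of thermodynamics are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```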
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 500
### Training results
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|