Dataset schema (each record below gives these metadata fields on one pipe-delimited row, followed by the full card text):

modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-30 00:44:18) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 536 distinct values) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 distinct values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-30 00:43:43) | card (string, length 11 to 1.01M)
Hachipo/Qwen2.5-7B-MIFT-en_newbase_v2-PIFT-jaen_1000_3 | Hachipo | 2025-06-22T17:47:34Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-22T17:44:44Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
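In the meantime, a minimal sketch of loading this checkpoint with the standard `transformers` causal-LM API (the repo id comes from this card; dtype, device, and the prompt are illustrative assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hachipo/Qwen2.5-7B-MIFT-en_newbase_v2-PIFT-jaen_1000_3"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")  # settings are assumptions

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```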
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
jimmysc171/heart_attack_check_model | jimmysc171 | 2025-06-22T17:46:05Z | 0 | 0 | null | ["medical", "license:mit", "region:us"] | null | 2025-06-22T17:39:17Z |
---
license: mit
tags:
- medical
---
yinita/cpdc_official-q3-8b-sft-3epoch | yinita | 2025-06-22T17:44:09Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-22T17:42:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
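In the meantime, a minimal sketch using the chat-template path, since the card's tags include `conversational` (the repo id comes from this card; everything else is an illustrative assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yinita/cpdc_official-q3-8b-sft-3epoch"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# the "conversational" tag suggests a chat template is defined (assumption)
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=64)[0], skip_special_tokens=True))
```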
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
TOMFORD79/kungfu_21 | TOMFORD79 | 2025-06-22T17:43:11Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-22T17:40:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
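In the meantime, a minimal sketch using the `pipeline` helper, matching the card's `text-generation` pipeline tag (the repo id comes from this card; prompt and settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="TOMFORD79/kungfu_21", device_map="auto")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```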
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
18-video-full-pakcricketinfo-sapna-shah-vi/leaks.video.sapna.shah.viral.video.original.link | 18-video-full-pakcricketinfo-sapna-shah-vi | 2025-06-22T17:42:59Z | 0 | 0 | null | ["region:us"] | null | 2025-06-22T17:42:36Z |
TOMFORD79/king20 | TOMFORD79 | 2025-06-22T17:42:57Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-22T12:51:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
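In the meantime, a minimal sketch using the `pipeline` helper, matching the card's `text-generation` pipeline tag (the repo id comes from this card; prompt and settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="TOMFORD79/king20", device_map="auto")
print(generator("The quick brown fox", max_new_tokens=50)[0]["generated_text"])
```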
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
Kinola-IQ/full_lyrics | Kinola-IQ | 2025-06-22T17:39:25Z | 9 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "gpt_neo", "text-generation", "generated_from_trainer", "base_model:EleutherAI/gpt-neo-125m", "base_model:finetune:EleutherAI/gpt-neo-125m", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-06-09T11:23:53Z |
---
library_name: transformers
license: mit
base_model: EleutherAI/gpt-neo-125M
tags:
- generated_from_trainer
model-index:
- name: full_lyrics
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# full_lyrics
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
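As a placeholder until usage is documented, a minimal sketch of sampling from the checkpoint (the repo id comes from this card; prompt and sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Kinola-IQ/full_lyrics")
print(generator("Verse 1:", max_new_tokens=80, do_sample=True, top_p=0.95)[0]["generated_text"])
```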
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
Ductratra/condenser_v1 | Ductratra | 2025-06-22T17:39:25Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-06-22T17:35:40Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1265 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"epochs": 8,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
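For reference, a hedged reconstruction of the training call these parameters describe, using the classic sentence-transformers `fit()` API (the toy `InputExample` data is a placeholder, not from the card; `{MODEL_NAME}` is the card's own placeholder):

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("{MODEL_NAME}")  # placeholder kept from this card
train_examples = [InputExample(texts=["A sentence", "A similar sentence"], label=1)]  # toy stand-in data
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.ContrastiveLoss(model=model, margin=0.5)  # cosine distance is the default metric

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=8,
    warmup_steps=1000,              # scheduler defaults to WarmupLinear, as listed above
    optimizer_params={"lr": 1e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```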
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
mradermacher/Foundry_24B_V.4-i1-GGUF | mradermacher | 2025-06-22T17:37:29Z | 0 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "creative", "roleplay", "en", "ru", "base_model:OddTheGreat/Foundry_24B_V.4", "base_model:quantized:OddTheGreat/Foundry_24B_V.4", "endpoints_compatible", "region:us", "imatrix"] | null | 2025-06-22T15:31:25Z |
---
base_model: OddTheGreat/Foundry_24B_V.4
language:
- en
- ru
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- creative
- roleplay
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/OddTheGreat/Foundry_24B_V.4
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Foundry_24B_V.4-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
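As one concrete starting point, a hedged sketch using the `llama-cpp-python` bindings (the repo and file names come from the quant table below; the bindings and settings are assumptions, and any GGUF-capable runtime works):

```python
from llama_cpp import Llama  # llama-cpp-python bindings (assumption; any GGUF runtime works)

llm = Llama.from_pretrained(
    repo_id="mradermacher/Foundry_24B_V.4-i1-GGUF",  # this repo
    filename="Foundry_24B_V.4.i1-Q4_K_M.gguf",       # the "fast, recommended" quant from the table below
    n_ctx=4096,                                      # illustrative context size
)
print(llm("Hello", max_tokens=64)["choices"][0]["text"])
```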
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Foundry_24B_V.4-i1-GGUF/resolve/main/Foundry_24B_V.4.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
fernandoruiz/EuroLLM-22B-Instruct-Preview-Q4_0-GGUF | fernandoruiz | 2025-06-22T17:36:05Z | 0 | 0 | transformers | ["transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:utter-project/EuroLLM-22B-Instruct-Preview", "base_model:quantized:utter-project/EuroLLM-22B-Instruct-Preview", "endpoints_compatible", "region:us", "conversational"] | null | 2025-06-22T17:35:06Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: utter-project/EuroLLM-22B-Instruct-Preview
---
# fernandoruiz/EuroLLM-22B-Instruct-Preview-Q4_0-GGUF
This model was converted to GGUF format from [`utter-project/EuroLLM-22B-Instruct-Preview`](https://huggingface.co/utter-project/EuroLLM-22B-Instruct-Preview) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/utter-project/EuroLLM-22B-Instruct-Preview) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo fernandoruiz/EuroLLM-22B-Instruct-Preview-Q4_0-GGUF --hf-file eurollm-22b-instruct-preview-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo fernandoruiz/EuroLLM-22B-Instruct-Preview-Q4_0-GGUF --hf-file eurollm-22b-instruct-preview-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo fernandoruiz/EuroLLM-22B-Instruct-Preview-Q4_0-GGUF --hf-file eurollm-22b-instruct-preview-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo fernandoruiz/EuroLLM-22B-Instruct-Preview-Q4_0-GGUF --hf-file eurollm-22b-instruct-preview-q4_0.gguf -c 2048
```
zecaihong/999e249f-6b05-4a37-9bc6-b4556645f48a.5 | zecaihong | 2025-06-22T17:35:17Z | 0 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-3B", "base_model:adapter:unsloth/Qwen2.5-3B", "license:other", "region:us"] | null | 2025-06-22T14:55:22Z |
---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 999e249f-6b05-4a37-9bc6-b4556645f48a.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: auto
chat_template: llama3
cosine_min_lr_ratio: 0.05
dataset_prepared_path: null
datasets:
- data_files:
- 9b229213575401f4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_prompt: ''
debug: null
deepspeed: deepspeed_configs/zero2.json
early_stopping_patience: 3
eval_max_new_tokens: 1024
eval_steps: 100
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
greater_is_better: false
group_by_length: false
hub_model_id: zecaihong/999e249f-6b05-4a37-9bc6-b4556645f48a.5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: -1
metric_for_best_model: eval_loss
micro_batch_size: 8
mlflow_experiment_name: /data/datasets/9b229213575401f4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 999e249f-6b05-4a37-9bc6-b4556645f48a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 999e249f-6b05-4a37-9bc6-b4556645f48a
warmup_steps: 100
weight_decay: 0.001
xformers_attention: null
```
</details><br>
# 999e249f-6b05-4a37-9bc6-b4556645f48a.5
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2942
## Model description
More information needed
## Intended uses & limitations
More information needed
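As a placeholder, a minimal sketch of loading this LoRA adapter on its base model with PEFT (the repo ids come from this card; dtype/device settings and the prompt are assumptions):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-3B", torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "zecaihong/999e249f-6b05-4a37-9bc6-b4556645f48a.5")  # this adapter
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-3B")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```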
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0015 | 1 | 2.0725 |
| 1.5616 | 0.1477 | 100 | 1.5921 |
| 1.4783 | 0.2953 | 200 | 1.5058 |
| 1.4387 | 0.4430 | 300 | 1.4631 |
| 1.4083 | 0.5906 | 400 | 1.4305 |
| 1.4014 | 0.7383 | 500 | 1.4090 |
| 1.3967 | 0.8859 | 600 | 1.3902 |
| 1.3449 | 1.0325 | 700 | 1.3784 |
| 1.3439 | 1.1801 | 800 | 1.3607 |
| 1.3281 | 1.3278 | 900 | 1.3506 |
| 1.3237 | 1.4755 | 1000 | 1.3399 |
| 1.3215 | 1.6231 | 1100 | 1.3316 |
| 1.3106 | 1.7708 | 1200 | 1.3245 |
| 1.2916 | 1.9184 | 1300 | 1.3166 |
| 1.286 | 2.0650 | 1400 | 1.3117 |
| 1.2722 | 2.2126 | 1500 | 1.3067 |
| 1.2736 | 2.3603 | 1600 | 1.3027 |
| 1.2722 | 2.5079 | 1700 | 1.2995 |
| 1.2645 | 2.6556 | 1800 | 1.2971 |
| 1.2574 | 2.8032 | 1900 | 1.2954 |
| 1.2653 | 2.9509 | 2000 | 1.2942 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
Hachipo/Qwen2.5-7B-MIFT-en_newbase_v2-MIFT-ja_1000_3 | Hachipo | 2025-06-22T17:35:14Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-22T17:32:25Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
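In the meantime, a minimal sketch of loading this checkpoint with the standard `transformers` causal-LM API (the repo id comes from this card; the Japanese prompt is an illustrative nod to the "MIFT-ja" name):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hachipo/Qwen2.5-7B-MIFT-en_newbase_v2-MIFT-ja_1000_3"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("こんにちは。", return_tensors="pt").to(model.device)  # illustrative prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```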
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
saital/Qwen3-0.6B-math-2k-3epoch | saital | 2025-06-22T17:34:36Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-0.6B", "base_model:finetune:unsloth/Qwen3-0.6B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-06-22T17:32:13Z |
---
base_model: unsloth/Qwen3-0.6B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** saital
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-0.6B
This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
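The card stops at provenance; a minimal sketch of running the checkpoint with plain `transformers` (the repo id comes from this card; the math prompt is an illustrative nod to the "math-2k" name):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="saital/Qwen3-0.6B-math-2k-3epoch")
print(pipe("What is 12 * 7?", max_new_tokens=64)[0]["generated_text"])
```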
brandonbeiler/InternVL3-38B-BNB-8bit | brandonbeiler | 2025-06-22T17:34:12Z | 3 | 0 | null | ["safetensors", "internvl_chat", "fp8", "quantization", "dynamic", "vision-language", "multimodal", "vllm", "llm-compressor", "internvl3", "image-text-to-text", "conversational", "custom_code", "en", "zh", "license:mit", "8-bit", "region:us"] | image-text-to-text | 2025-06-19T16:48:17Z |
---
language:
- en
- zh
tags:
- fp8
- quantization
- dynamic
- vision-language
- multimodal
- vllm
- llm-compressor
- internvl3
pipeline_tag: image-text-to-text
inference: false
license: mit
---
# 🔥 InternVL3-38B-FP8-Dynamic: Optimized Vision-Language Model 🔥
This is a **FP8 dynamic quantized** version of [OpenGVLab/InternVL3-38B](https://huggingface.co/OpenGVLab/InternVL3-38B), optimized for high-performance inference with vLLM.
The model utilizes **dynamic FP8 quantization** for optimal ease of use and deployment, achieving significant speedup with minimal accuracy degradation on vision-language tasks.
## Key Features
- **FP8 Dynamic Quantization**: No calibration required, ready to use immediately
- **Vision-Language Optimized**: Specialized quantization recipe that preserves visual understanding
- **vLLM Ready**: Seamless integration with vLLM for production deployment
- **Memory Efficient**: ~50% memory reduction compared to FP16 original
- **Performance Boost**: Significantly faster inference on H100/L40S GPUs
- **Easy Deployment**: No calibration dataset needed for quantization
## Model Details
- **Original Model**: [OpenGVLab/InternVL3-38B](https://huggingface.co/OpenGVLab/InternVL3-38B)
- **Source Model**: OpenGVLab/InternVL3-38B
- **Quantized Model**: InternVL3-38B-BNB-8bit
- **Quantization Method**: FP8 Dynamic (W8A8)
- **Quantization Library**: [LLM Compressor](https://github.com/vllm-project/llm-compressor) v0.5.2.dev112+g6800f811
- **Quantized by**: [brandonbeiler](https://huggingface.co/brandonbeiler)
## 🔧 Usage
### With vLLM (Recommended)
```python
from vllm import LLM, SamplingParams
# Load the quantized model
model = LLM(
model="brandonbeiler/InternVL3-38B-BNB-8bit",
trust_remote_code=True,
max_model_len=8192,
tensor_parallel_size=1, # Adjust based on your GPU setup
)
# Generate response
sampling_params = SamplingParams(temperature=0.7, max_tokens=512)
response = model.generate("Describe this image: <image>", sampling_params)
print(response[0].outputs[0].text)
```
## Technical Specifications
### Hardware Requirements
- **Inference**: ? VRAM
- **Supported GPUs**: H100, L40S, A100 (80GB), RTX 4090 (2x for tensor parallelism)
- **GPU Architecture**: Ada Lovelace, Hopper (for optimal FP8 performance)
### Quantization Details
- **Weights**: FP8 E4M3 with dynamic per-tensor scales
- **Activations**: FP8 E4M3 with dynamic per-tensor scales
- **Preserved Components**: Vision tower, embeddings, normalization layers, mlp1
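For context, a hedged sketch of how an FP8-dynamic recipe of this shape is typically produced with [LLM Compressor](https://github.com/vllm-project/llm-compressor); module paths follow the library's published examples and may differ across versions, and the `ignore` patterns are assumptions mirroring the preserved components listed above:

```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot

# FP8 dynamic (W8A8): scales are computed at runtime, so no calibration data is needed
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["re:.*lm_head", "re:.*vision_model.*", "re:.*mlp1.*"],  # assumed patterns for the preserved components
)
oneshot(model="OpenGVLab/InternVL3-38B", recipe=recipe, output_dir="InternVL3-38B-FP8-Dynamic")
```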
## 🔬 Package Versions
This model was created using:
```
llmcompressor==0.5.2.dev112+g6800f811
compressed-tensors==latest
transformers==4.52.4
torch==2.7.0
vllm==0.9.1
```
*Quantized with ❤️ using LLM Compressor for the open-source community*
kldzj/Qwen3-235B-A22B-bnb-8bit | kldzj | 2025-06-22T17:30:01Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3_moe", "text-generation", "conversational", "arxiv:2309.00071", "arxiv:2505.09388", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us"] | text-generation | 2025-06-22T16:59:50Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-235B-A22B/blob/main/LICENSE
pipeline_tag: text-generation
---
# Qwen3-235B-A22B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) **and non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-235B-A22B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 235B in total and 22B activated
- Number of Parameters (Non-Embedding): 234B
- Number of Layers: 94
- Number of Attention Heads (GQA): 64 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3-MoE has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3_moe'
```
The following contains a code snippet illustrating how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-235B-A22B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-235B-A22B --reasoning-parser qwen3 --tp 8
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-235B-A22B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-235B-A22B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use an MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-235B-A22B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
    #     # Add: When the response content is `<think>this is the thought</think>this is the answer`;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
   - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (see the sketch after this list). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This behavior is already implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
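As a concrete illustration of the sampling recommendations in item 1, here is a minimal `transformers` sketch for thinking mode (the model name mirrors the Quickstart; parameter names follow the Hugging Face `generate` API, and the prompt is a placeholder):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-235B-A22B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain the Pythagorean theorem."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Thinking-mode settings: sampling enabled, never greedy decoding
outputs = model.generate(
    **inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```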
### Citation
If you find our work helpful, feel free to cite it.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
Thermostatic/neuraltranslate-27b-mt-es-nah-v1
|
Thermostatic
| 2025-06-22T17:28:44Z | 0 | 0 | null |
[
"safetensors",
"gemma3",
"Translation",
"Gemma 3",
"Spanish",
"Nahuatl",
"Machine translation",
"es",
"nah",
"dataset:Thermostatic/Axolotl-Spanish-Nahuatl-ShareGPT-Filtered-Splits",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2025-06-22T16:14:57Z |
---
license: mit
datasets:
- Thermostatic/Axolotl-Spanish-Nahuatl-ShareGPT-Filtered-Splits
language:
- es
- nah
tags:
- Translation
- Gemma 3
- Spanish
- Nahuatl
- Machine translation
---

# Model Card for NeuralTranslate
<!-- Provide a quick summary of what the model is/does. -->
THIS MODEL USES THE GEMMA 3 TEMPLATE.
This is the first official release of NeuralTranslate 27b Machine Translation: Spanish to Nahuatl. The base model is Gemma 3 27b Instruct, trained on the Axolotl Spanish-Nahuatl dataset for 4 epochs.
You can donate towards this project at my ko-fi! https://ko-fi.com/irvingernesto
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Irving Ernesto
- **Funded by [optional]:** Irving Ernesto
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
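No official usage snippet has been published yet. Below is a minimal, unofficial sketch assuming the standard `transformers` chat interface; depending on how the checkpoint was exported, a Gemma 3-specific class such as `Gemma3ForConditionalGeneration` may be required instead of `AutoModelForCausalLM`, and the example sentence is only a placeholder:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Thermostatic/neuraltranslate-27b-mt-es-nah-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

# The model uses the Gemma 3 chat template; the sentence below is illustrative
messages = [{"role": "user", "content": "Traduce al náhuatl: El agua es vida."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```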
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mdlufy/angular-mistral-7b
|
mdlufy
| 2025-06-22T17:27:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T17:26:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MrMike42/GameReview-llama3.1-8b-v1-GGUF
|
MrMike42
| 2025-06-22T17:24:13Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T17:22:46Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MrMike42
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
faturbbx/cv_uas
|
faturbbx
| 2025-06-22T17:23:23Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T17:18:46Z |
---
license: apache-2.0
---
|
K10S/disease-prediction-distilbert-v3
|
K10S
| 2025-06-22T17:19:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-22T17:19:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/NeoBERT-GGUF
|
mradermacher
| 2025-06-22T17:19:35Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:tiiuae/falcon-refinedweb",
"base_model:chandar-lab/NeoBERT",
"base_model:quantized:chandar-lab/NeoBERT",
"license:mit",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-06-22T17:16:54Z |
---
base_model: chandar-lab/NeoBERT
datasets:
- tiiuae/falcon-refinedweb
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/chandar-lab/NeoBERT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
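For example, a single quant from this repo can be fetched programmatically with `huggingface_hub` (any filename from the table below works):
```python
from huggingface_hub import hf_hub_download

# Download one quant file from this repository into the local cache
path = hf_hub_download(
    repo_id="mradermacher/NeoBERT-GGUF",
    filename="NeoBERT.Q4_K_M.gguf",
)
print(path)
```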
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeoBERT-GGUF/resolve/main/NeoBERT.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeoBERT-GGUF/resolve/main/NeoBERT.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeoBERT-GGUF/resolve/main/NeoBERT.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeoBERT-GGUF/resolve/main/NeoBERT.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeoBERT-GGUF/resolve/main/NeoBERT.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeoBERT-GGUF/resolve/main/NeoBERT.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeoBERT-GGUF/resolve/main/NeoBERT.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeoBERT-GGUF/resolve/main/NeoBERT.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeoBERT-GGUF/resolve/main/NeoBERT.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeoBERT-GGUF/resolve/main/NeoBERT.Q6_K.gguf) | Q6_K | 0.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeoBERT-GGUF/resolve/main/NeoBERT.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NeoBERT-GGUF/resolve/main/NeoBERT.f16.gguf) | f16 | 0.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Andrea238/Llama-3.2-1B-Instruct-terapeuticovf
|
Andrea238
| 2025-06-22T17:19:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T17:17:32Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Andrea238
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Alissonerdx/UltraWanComfy
|
Alissonerdx
| 2025-06-22T17:14:25Z | 0 | 10 | null |
[
"dataset:APRIL-AIGC/UltraVideo",
"arxiv:2506.13691",
"base_model:Wan-AI/Wan2.1-T2V-1.3B",
"base_model:finetune:Wan-AI/Wan2.1-T2V-1.3B",
"license:cc-by-4.0",
"region:us"
] | null | 2025-06-20T07:08:50Z |
---
license: cc-by-4.0
datasets:
- APRIL-AIGC/UltraVideo
base_model:
- Wan-AI/Wan2.1-T2V-1.3B
---
# UltraWan LoRA – ComfyUI Conversion (Unofficial)
**This LoRA only works with the T2V 1.3B model; it does not work with the 14B. The 4K variant has some problems, but the 1K variant works well.**
This repository contains **converted LoRA weights** from the official [**APRIL-AIGC/UltraWan**](https://huggingface.co/APRIL-AIGC/UltraWan) release.
The only modification made was converting the `.ckpt` / original format into `.safetensors` compatible with **ComfyUI**.
No training, fine-tuning, or architecture changes were performed.
---
## What This Is
- ✅ Direct conversion of original UltraWan LoRA weights to **ComfyUI-compatible safetensors format**
- ❌ Not trained, modified, or altered in any way beyond the format change
- 🎯 For users who want to test **UltraWan LoRA** with ComfyUI's Text-to-Video pipelines
---
## Attribution
- **Original Model**: [**APRIL-AIGC/UltraWan**](https://huggingface.co/APRIL-AIGC/UltraWan)
- **Base Model Architecture**: [TencentARC/Wan](https://github.com/TencentARC/Wan)
- **License**: CC-BY-4.0 with Additional Restrictions (see below)
> I do **not** claim ownership of the original model, dataset, or LoRA weights.
> All credit goes to the **APRIL-AIGC** team and the authors of the [UltraVideo Dataset](https://arxiv.org/abs/2506.13691).
---
## License & Use
These weights are subject to the license terms of the **UltraVideo Dataset**, which include:
- ✅ **Non-commercial research only**
- ⚠️ No redistribution of raw videos or identifiable content
- ✅ You may use these weights to run inference in ComfyUI for research/academic purposes
- ❌ You may **not** use these weights or outputs for commercial purposes, product development, or monetization
Please see the full license in [`LICENSE`](./LICENSE).
---
## Recommended LoRA Strength
| LoRA Version | Recommended Strength |
|--------------------|----------------------|
| `ultrawan_1k` | **0.25** |
| `ultrawan_4k` | **0.5** |

> You may experiment with different values, but these are the recommended starting points for best results.
---
## Contents
| File | Description |
|------|-------------|
| `ultrawan_1k_comfy.safetensors` | LoRA converted to safetensors, ready for ComfyUI |
| `ultrawan_4k_comfy.safetensors` | LoRA converted to safetensors, ready for ComfyUI |
| `LICENSE` | UltraVideo Dataset License (CC-BY-4.0 + Restrictions) |
| `README.md` | This file |
---
## How to Use in ComfyUI
Place the `.safetensors` file in:
```
ComfyUI/models/loras/
```
Then use a **LoRA Loader** node in your workflow and select the LoRA file from the dropdown menu.
---
*This is a community-contributed format conversion. All rights belong to the original creators.*
|
Prince-1/Granite-3.3-2B-Instruct-Onnx
|
Prince-1
| 2025-06-22T17:12:11Z | 0 | 0 |
onnxruntime_genai
|
[
"onnxruntime_genai",
"onnx",
"language",
"granite-3.3",
"text-generation",
"conversational",
"arxiv:0000.00000",
"base_model:ibm-granite/granite-3.3-2b-instruct",
"base_model:quantized:ibm-granite/granite-3.3-2b-instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-22T17:11:49Z |
---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: onnxruntime_genai
tags:
- language
- granite-3.3
- onnxruntime_genai
base_model:
- ibm-granite/granite-3.3-2b-instruct
---
# Granite-3.3-8B-Instruct
**Model Summary:**
Granite-3.3-8B-Instruct is an 8-billion-parameter, 128K-context-length language model fine-tuned for improved reasoning and instruction-following capabilities. Built on top of Granite-3.3-8B-Base, the model delivers significant gains on benchmarks measuring general performance, including AlpacaEval-2.0 and Arena-Hard, and improvements in mathematics, coding, and instruction following. It supports structured reasoning through \<think\>\<\/think\> and \<response\>\<\/response\> tags, providing clear separation between internal thoughts and final outputs. The model has been trained on a carefully balanced combination of permissively licensed data and curated synthetic tasks.
- **Developers:** Granite Team, IBM
- **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- **Release Date**: April 16th, 2025
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
**Supported Languages:**
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may finetune this Granite model for languages beyond these 12 languages.
**Intended Use:**
This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications.
**Capabilities**
* Thinking
* Summarization
* Text classification
* Text extraction
* Question-answering
* Retrieval Augmented Generation (RAG)
* Code related tasks
* Function-calling tasks
* Multilingual dialog use cases
<!-- * Fill-in-the-middle -->
* Long-context tasks including long document/meeting summarization, long document QA, etc.
**Generation:**
This is a simple example of how to use Granite-3.3-8B-Instruct model.
Install the following libraries:
```shell
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
```
Then, copy the snippet from the section that is relevant for your use case.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
import torch
model_path="ibm-granite/granite-3.3-8b-instruct"
device="cuda"
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map=device,
torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(
model_path
)
conv = [{"role": "user", "content":"Redesign a common household item to make it more sustainable and user-friendly. Explain the changes and their benefits."}]
input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device)
set_seed(42)
output = model.generate(
**input_ids,
max_new_tokens=8192,
)
prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True)
print(prediction)
```
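Since this repository packages the model for `onnxruntime_genai`, below is a minimal, unofficial sketch using that runtime (assumes a recent `onnxruntime-genai` release; the model directory path and search options are placeholders):
```python
import onnxruntime_genai as og

# Point this at the directory containing the exported ONNX model files
model = og.Model("./granite-3.3-2b-instruct-onnx")
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

params = og.GeneratorParams(model)
params.set_search_options(max_length=2048)

generator = og.Generator(model, params)
# For brevity this uses a raw prompt; apply the Granite chat template for best results
generator.append_tokens(tokenizer.encode("Redesign a common household item to make it more sustainable."))

# Stream tokens until generation finishes
while not generator.is_done():
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
```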
**Example Outputs**
- thinking=True
```md
<think>To address this prompt, I'll focus on redesigning a common household item: the plastic water bottle. The goal is to make it more sustainable and user-friendly. Here are the proposed changes and their benefits:
1. **Material**: Replace plastic with biodegradable or recyclable materials like glass or plant-based plastics (PLA).
2. **Design**: Introduce a modular design for easy refilling and cleaning.
3. **Functionality**: Incorporate a built-in water filter for purification.
4. **Durability**: Enhance the bottle's lifespan through robust construction.
5. **User-friendliness**: Simplify the cap mechanism for easy opening and closing.
These changes would significantly reduce plastic waste, promote recycling, ensure access to clean drinking water, and enhance the overall user experience.
Now, I will detail each change and its benefits in a structured format.</think>
<response>### Redesign of the Plastic Water Bottle for Sustainability and User-Friendliness
#### 1. Material: Biodegradable or Recyclable Alternatives
**Change**: Transition from traditional PET plastic to glass or plant-based plastics (Polylactic Acid - PLA).
**Benefits**:
- **Sustainability**: Glass and PLA are biodegradable or highly recyclable, drastically reducing plastic pollution.
- **Health**: Unlike PET, these materials do not leach chemicals into the water, ensuring safer consumption.
#### 2. Modular Design for Easy Refilling and Cleaning
**Change**: Implement a detachable, interchangeable inner liner within a reusable outer casing.
**Benefits**:
- **User-Friendliness**: Users can easily remove the inner liner for thorough cleaning, reducing the risk of bacterial growth.
- **Sustainability**: The outer casing remains intact, encouraging reuse, while the inner liner can be replaced or recycled when worn out.
#### 3. Built-in Water Filter
**Change**: Integrate a removable, replaceable water filter into the bottle cap.
**Benefits**:
- **Accessibility**: Provides clean drinking water directly from any source (tap, river, etc.), eliminating the need for additional filtration devices.
- **Health**: Removes contaminants, bacteria, and impurities, ensuring safe hydration.
- **Sustainability**: Reduces the use of single-use plastic water bottles by offering a purification solution within a reusable container.
#### 4. Enhanced Durability
**Change**: Use high-quality, durable materials and robust construction techniques for both the outer casing and inner liner.
**Benefits**:
- **Longevity**: A more robust design increases the bottle's lifespan, reducing the frequency of replacements.
- **Cost-effectiveness**: Although the initial cost might be higher, the long-term savings from reduced replacements make it economical.
- **Sustainability**: Less frequent disposal means lower environmental impact.
#### 5. Simplified Cap Mechanism
**Change**: Adopt a one-touch, easy-to-open and close cap design, possibly inspired by sports bottle caps.
**Benefits**:
- **User-Friendliness**: Especially beneficial for children, elderly, or those with limited hand dexterity.
- **Convenience**: Quick access to water encourages more frequent hydration, promoting healthier habits.
### Conclusion
This redesigned water bottle addresses key sustainability concerns by minimizing plastic use and waste, while also enhancing user-friendliness through improved functionality, durability, and accessibility. By integrating biodegradable materials, a modular design, a built-in filter, robust construction, and a simple cap mechanism, the bottle not only reduces environmental footprint but also offers a superior user experience.</response>
```
- thinking=False
```md
Item for Redesign: Plastic Bottle (e.g., water or beverage bottle)
Redesign Concept: Eco-Friendly, Modular, and Multi-Functional Reusable Bottle
Changes:
1. Material: Replace the conventional plastic with food-grade, BPA-free, and recyclable materials such as stainless steel or glass. These materials are durable, non-toxic, and have a longer lifespan, reducing waste.
2. Modular Design: Introduce a modular design that allows users to customize their bottle according to their needs. This could include interchangeable caps, mouthpiece sizes, and carrying loops. Users can purchase additional modules as needed, promoting a more sustainable approach by reducing the need to buy an entirely new bottle for different purposes.
3. Integrated Filter: Incorporate a built-in, washable, and reusable filter that can remove impurities and improve the taste of water. This eliminates the need for single-use disposable filters or bottled water, further reducing plastic waste.
4. Smart Cap: Develop a smart cap with a built-in digital display and temperature sensor. This feature allows users to track their daily water intake, set hydration goals, and monitor the temperature of their beverage. The smart cap can be synced with a mobile app for additional functionality, such as reminders and progress tracking.
5. Easy-to-Clean Design: Ensure the bottle has a wide mouth and smooth interior surfaces for easy cleaning. Include a brush for hard-to-reach areas, making maintenance simple and encouraging regular use.
6. Collapsible Structure: Implement a collapsible design that reduces the bottle's volume when not in use, making it more portable and convenient for storage.
Benefits:
1. Sustainability: By using recyclable materials and reducing plastic waste, this redesigned bottle significantly contributes to a more sustainable lifestyle. The modular design and reusable filter also minimize single-use plastic consumption.
2. User-Friendly: The smart cap, easy-to-clean design, and collapsible structure make the bottle convenient and user-friendly. Users can customize their bottle to suit their needs, ensuring a better overall experience.
3. Healthier Option: Using food-grade, BPA-free materials and an integrated filter ensures that the beverages consumed are free from harmful chemicals and impurities, promoting a healthier lifestyle.
4. Cost-Effective: Although the initial investment might be higher, the long-term savings from reduced purchases of single-use plastic bottles and disposable filters make this reusable bottle a cost-effective choice.
5. Encourages Hydration: The smart cap's features, such as hydration tracking and temperature monitoring, can motivate users to stay hydrated and develop healthier habits.
By redesigning a common household item like the plastic bottle, we can create a more sustainable, user-friendly, and health-conscious alternative that benefits both individuals and the environment.
```
**Evaluation Results:**
<table>
<thead>
<caption style="text-align:center"><b>Comparison with different models over various benchmarks<sup id="fnref1"><a href="#fn1">1</a></sup>. Scores of AlpacaEval-2.0 and Arena-Hard are calculated with thinking=True</b></caption>
<tr>
<th style="text-align:left; background-color: #001d6c; color: white;">Models</th>
<th style="text-align:center; background-color: #001d6c; color: white;">Arena-Hard</th>
<th style="text-align:center; background-color: #001d6c; color: white;">AlpacaEval-2.0</th>
<th style="text-align:center; background-color: #001d6c; color: white;">MMLU</th>
<th style="text-align:center; background-color: #001d6c; color: white;">PopQA</th>
<th style="text-align:center; background-color: #001d6c; color: white;">TruthfulQA</th>
<th style="text-align:center; background-color: #001d6c; color: white;">BigBenchHard<sup id="fnref2"><a href="#fn2">2</a></sup></th>
<th style="text-align:center; background-color: #001d6c; color: white;">DROP<sup id="fnref3"><a href="#fn3">3</a></sup></th>
<th style="text-align:center; background-color: #001d6c; color: white;">GSM8K</th>
<th style="text-align:center; background-color: #001d6c; color: white;">HumanEval</th>
<th style="text-align:center; background-color: #001d6c; color: white;">HumanEval+</th>
<th style="text-align:center; background-color: #001d6c; color: white;">IFEval</th>
<th style="text-align:center; background-color: #001d6c; color: white;">AttaQ</th>
</tr></thead>
<tbody>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-2B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">23.3</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">27.17</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">57.11</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">20.55</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">59.79</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">61.82</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">20.99</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.55</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">79.45</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">75.26</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">63.59</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">84.7</td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.2-2B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">24.86</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">34.51</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">57.18</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">20.56</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">59.8</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">61.39</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">23.84</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.02</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">80.13</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">73.39</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">61.55</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">83.23</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.3-2B-Instruct</b></td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 28.86 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 43.45 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 55.88 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 18.4 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 58.97 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 63.91 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 44.33 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 72.48 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 80.51 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 75.68 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 65.8 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">87.47</td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Llama-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">36.43</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">27.22</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">69.15</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">28.79</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">52.79</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">73.43</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">71.23</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">83.24</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">85.32</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">80.15</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">79.10</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">83.43</td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">DeepSeek-R1-Distill-Llama-8B</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">17.17</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">21.85</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">45.80</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">13.25</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">47.43</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.39</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">49.73</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">72.18</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.54</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">62.91</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">66.50</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">42.87</td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Qwen-2.5-7B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">25.44</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">30.34</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">74.30</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">18.12</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">63.06</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">69.19</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">64.06</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">84.46</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">93.35</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">89.91</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">74.90</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">81.90</td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">DeepSeek-R1-Distill-Qwen-7B</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">10.36</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">15.35</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">50.72</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">9.94</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">47.14</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.38</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">51.78</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">78.47</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">79.89</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">78.43</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">59.10</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">42.45</td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">37.58</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">30.34</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">66.77</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">28.7</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">65.84</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">69.87</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">58.57</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">79.15</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">89.63</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">85.79</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">73.20</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">85.73</td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.2-8B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">55.25</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">61.19</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">66.79</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">28.04</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">66.92</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">71.86</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">58.29</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">81.65</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">89.35</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">85.72</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">74.31</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">84.7</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.3-8B-Instruct</b></td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 57.56 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 62.68 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 65.54 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 26.17 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 66.86 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 69.13 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 59.36 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 80.89 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 89.73 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 86.09 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 74.82 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">88.5</td>
</tr>
</tbody></table>
<table>
<caption style="text-align:center"><b>Math Benchmarks</b></caption>
<thead>
<tr>
<th style="text-align:left; background-color: #001d6c; color: white;">Models</th>
<th style="text-align:center; background-color: #001d6c; color: white;">AIME24</th>
<th style="text-align:center; background-color: #001d6c; color: white;">MATH-500</th>
</tr></thead>
<tbody>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-2B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 0.89 </td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 35.07 </td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.2-2B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 0.89 </td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 35.54 </td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.3-2B-Instruct</b></td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 3.28 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 58.09 </td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 1.97 </td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 48.73 </td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.2-8B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 2.43 </td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 52.8 </td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.3-8B-Instruct</b></td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 8.12 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 69.02 </td>
</tr>
</tbody></table>
**Training Data:**
Overall, our training data largely comprises two key sources: (1) publicly available datasets with permissive licenses, and (2) internal synthetically generated data targeted at enhancing reasoning capabilities.
<!-- A detailed attribution of datasets can be found in [Granite 3.2 Technical Report (coming soon)](#), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf). -->
**Infrastructure:**
We train Granite-3.3-8B-Instruct using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models across thousands of GPUs.
**Ethical Considerations and Limitations:**
Granite-3.3-8B-Instruct builds upon Granite-3.3-8B-Base, leveraging both permissively licensed open-source and select proprietary data for enhanced performance. Since it inherits its foundation from the previous model, all ethical considerations and limitations applicable to [Granite-3.3-8B-Base](https://huggingface.co/ibm-granite/granite-3.3-8b-base) remain relevant.
**Resources**
- βοΈ Learn about the latest updates with Granite: https://www.ibm.com/granite
- π Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- π‘ Learn about the latest Granite learning resources: https://github.com/ibm-granite-community/
<p><a href="#fnref1" title="Jump back to reference">[1]</a> Evaluated using <a href="https://github.com/allenai/olmes">OLMES</a> (except AttaQ and Arena-Hard scores)</p>
<p><a href="#fnref2" title="Jump back to reference">[2]</a> Added regex for more efficient answer extraction.</p>
<p><a href="#fnref3" title="Jump back to reference">[3]</a> Modified the implementation to handle some of the issues mentioned <a href="https://huggingface.co/blog/open-llm-leaderboard-drop">here</a></p>
<!-- ## Citation
```
@misc{granite-models,
author = {author 1, author2, ...},
title = {},
journal = {},
volume = {},
year = {2024},
url = {https://arxiv.org/abs/0000.00000},
}
``` -->
|
luxetveritas/mahatas-taxi
|
luxetveritas
| 2025-06-22T17:08:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-22T16:40:26Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: mahatas-taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the original course notebook may use the legacy `gym` package instead

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook;
# it downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="luxetveritas/mahatas-taxi", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
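For a quick sanity check, here is a minimal greedy-rollout sketch — an illustration only, assuming the pickled dictionary exposes `qtable` and `env_id` keys (the Deep RL course format) and a gymnasium-style 5-tuple `step` API:

```python
import numpy as np

env = gym.make(model["env_id"])
qtable = model["qtable"]

state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(qtable[state]))  # always take the greedy action
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward}")
```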
|
Prince-1/Granite-3.3-8B-Instruct-Onnx
|
Prince-1
| 2025-06-22T17:08:25Z | 0 | 0 |
onnxruntime_genai
|
[
"onnxruntime_genai",
"onnx",
"language",
"granite-3.3",
"text-generation",
"conversational",
"arxiv:0000.00000",
"base_model:ibm-granite/granite-3.3-8b-instruct",
"base_model:quantized:ibm-granite/granite-3.3-8b-instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-22T17:06:52Z |
---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: onnxruntime_genai
tags:
- language
- granite-3.3
- onnxruntime_genai
base_model:
- ibm-granite/granite-3.3-8b-instruct
---
# Granite-3.3-8B-Instruct
**Model Summary:**
Granite-3.3-8B-Instruct is an 8-billion-parameter, 128K-context-length language model fine-tuned for improved reasoning and instruction-following capabilities. Built on top of Granite-3.3-8B-Base, the model delivers significant gains on benchmarks measuring generic performance, including AlpacaEval-2.0 and Arena-Hard, and improvements in mathematics, coding, and instruction following. It supports structured reasoning through \<think\>\<\/think\> and \<response\>\<\/response\> tags, providing clear separation between internal thoughts and final outputs. The model has been trained on a carefully balanced combination of permissively licensed data and curated synthetic tasks.
- **Developers:** Granite Team, IBM
- **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- **Release Date**: April 16th, 2025
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
**Supported Languages:**
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may finetune this Granite model for languages beyond these 12 languages.
**Intended Use:**
This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications.
**Capabilities**
* Thinking
* Summarization
* Text classification
* Text extraction
* Question-answering
* Retrieval Augmented Generation (RAG)
* Code related tasks
* Function-calling tasks
* Multilingual dialog use cases
<!-- * Fill-in-the-middle -->
* Long-context tasks including long document/meeting summarization, long document QA, etc.
**Generation:**
This is a simple example of how to use Granite-3.3-8B-Instruct model.
Install the following libraries:
```shell
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
```
Then, copy the snippet from the section that is relevant for your use case.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
import torch
model_path="ibm-granite/granite-3.3-8b-instruct"
device="cuda"
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map=device,
torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(
model_path
)
conv = [{"role": "user", "content":"Redesign a common household item to make it more sustainable and user-friendly. Explain the changes and their benefits."}]
input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device)
set_seed(42)
output = model.generate(
**input_ids,
max_new_tokens=8192,
)
prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True)
print(prediction)
```
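Since this particular repository packages the ONNX export for `onnxruntime_genai`, a streaming-generation sketch is shown below. This is a sketch only: it assumes the ONNX model files have already been downloaded to a local folder, uses an illustrative prompt string (in practice, apply the chat template shipped with the tokenizer), and follows the style of recent onnxruntime-genai examples, whose API surface varies across releases.

```python
import onnxruntime_genai as og

# Hypothetical local path to the downloaded ONNX model folder.
model = og.Model("./granite-3.3-8b-instruct-onnx")
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

prompt = "What is 17 * 24?"  # illustrative; real use should apply the model's chat template
input_tokens = tokenizer.encode(prompt)

params = og.GeneratorParams(model)
params.set_search_options(max_length=512)

generator = og.Generator(model, params)
generator.append_tokens(input_tokens)
while not generator.is_done():
    generator.generate_next_token()
    # Decode and print tokens as they are produced.
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
```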
**Example Outputs**
- thinking=True
```md
<think>To address this prompt, I'll focus on redesigning a common household item: the plastic water bottle. The goal is to make it more sustainable and user-friendly. Here are the proposed changes and their benefits:
1. **Material**: Replace plastic with biodegradable or recyclable materials like glass or plant-based plastics (PLA).
2. **Design**: Introduce a modular design for easy refilling and cleaning.
3. **Functionality**: Incorporate a built-in water filter for purification.
4. **Durability**: Enhance the bottle's lifespan through robust construction.
5. **User-friendliness**: Simplify the cap mechanism for easy opening and closing.
These changes would significantly reduce plastic waste, promote recycling, ensure access to clean drinking water, and enhance the overall user experience.
Now, I will detail each change and its benefits in a structured format.</think>
<response>### Redesign of the Plastic Water Bottle for Sustainability and User-Friendliness
#### 1. Material: Biodegradable or Recyclable Alternatives
**Change**: Transition from traditional PET plastic to glass or plant-based plastics (Polylactic Acid - PLA).
**Benefits**:
- **Sustainability**: Glass and PLA are biodegradable or highly recyclable, drastically reducing plastic pollution.
- **Health**: Unlike PET, these materials do not leach chemicals into the water, ensuring safer consumption.
#### 2. Modular Design for Easy Refilling and Cleaning
**Change**: Implement a detachable, interchangeable inner liner within a reusable outer casing.
**Benefits**:
- **User-Friendliness**: Users can easily remove the inner liner for thorough cleaning, reducing the risk of bacterial growth.
- **Sustainability**: The outer casing remains intact, encouraging reuse, while the inner liner can be replaced or recycled when worn out.
#### 3. Built-in Water Filter
**Change**: Integrate a removable, replaceable water filter into the bottle cap.
**Benefits**:
- **Accessibility**: Provides clean drinking water directly from any source (tap, river, etc.), eliminating the need for additional filtration devices.
- **Health**: Removes contaminants, bacteria, and impurities, ensuring safe hydration.
- **Sustainability**: Reduces the use of single-use plastic water bottles by offering a purification solution within a reusable container.
#### 4. Enhanced Durability
**Change**: Use high-quality, durable materials and robust construction techniques for both the outer casing and inner liner.
**Benefits**:
- **Longevity**: A more robust design increases the bottle's lifespan, reducing the frequency of replacements.
- **Cost-effectiveness**: Although the initial cost might be higher, the long-term savings from reduced replacements make it economical.
- **Sustainability**: Less frequent disposal means lower environmental impact.
#### 5. Simplified Cap Mechanism
**Change**: Adopt a one-touch, easy-to-open and close cap design, possibly inspired by sports bottle caps.
**Benefits**:
- **User-Friendliness**: Especially beneficial for children, elderly, or those with limited hand dexterity.
- **Convenience**: Quick access to water encourages more frequent hydration, promoting healthier habits.
### Conclusion
This redesigned water bottle addresses key sustainability concerns by minimizing plastic use and waste, while also enhancing user-friendliness through improved functionality, durability, and accessibility. By integrating biodegradable materials, a modular design, a built-in filter, robust construction, and a simple cap mechanism, the bottle not only reduces environmental footprint but also offers a superior user experience.</response>
```
- thinking=False
```md
Item for Redesign: Plastic Bottle (e.g., water or beverage bottle)
Redesign Concept: Eco-Friendly, Modular, and Multi-Functional Reusable Bottle
Changes:
1. Material: Replace the conventional plastic with food-grade, BPA-free, and recyclable materials such as stainless steel or glass. These materials are durable, non-toxic, and have a longer lifespan, reducing waste.
2. Modular Design: Introduce a modular design that allows users to customize their bottle according to their needs. This could include interchangeable caps, mouthpiece sizes, and carrying loops. Users can purchase additional modules as needed, promoting a more sustainable approach by reducing the need to buy an entirely new bottle for different purposes.
3. Integrated Filter: Incorporate a built-in, washable, and reusable filter that can remove impurities and improve the taste of water. This eliminates the need for single-use disposable filters or bottled water, further reducing plastic waste.
4. Smart Cap: Develop a smart cap with a built-in digital display and temperature sensor. This feature allows users to track their daily water intake, set hydration goals, and monitor the temperature of their beverage. The smart cap can be synced with a mobile app for additional functionality, such as reminders and progress tracking.
5. Easy-to-Clean Design: Ensure the bottle has a wide mouth and smooth interior surfaces for easy cleaning. Include a brush for hard-to-reach areas, making maintenance simple and encouraging regular use.
6. Collapsible Structure: Implement a collapsible design that reduces the bottle's volume when not in use, making it more portable and convenient for storage.
Benefits:
1. Sustainability: By using recyclable materials and reducing plastic waste, this redesigned bottle significantly contributes to a more sustainable lifestyle. The modular design and reusable filter also minimize single-use plastic consumption.
2. User-Friendly: The smart cap, easy-to-clean design, and collapsible structure make the bottle convenient and user-friendly. Users can customize their bottle to suit their needs, ensuring a better overall experience.
3. Healthier Option: Using food-grade, BPA-free materials and an integrated filter ensures that the beverages consumed are free from harmful chemicals and impurities, promoting a healthier lifestyle.
4. Cost-Effective: Although the initial investment might be higher, the long-term savings from reduced purchases of single-use plastic bottles and disposable filters make this reusable bottle a cost-effective choice.
5. Encourages Hydration: The smart cap's features, such as hydration tracking and temperature monitoring, can motivate users to stay hydrated and develop healthier habits.
By redesigning a common household item like the plastic bottle, we can create a more sustainable, user-friendly, and health-conscious alternative that benefits both individuals and the environment.
```
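Because the model separates internal reasoning from the final answer with `<think>` and `<response>` tags, downstream code often wants just the response. Below is a small post-processing sketch — an assumption-laden helper, not part of any official API; it expects at most one pair of each tag in the decoded text:

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Return (internal_thought, final_response) parsed from a decoded completion."""
    thought = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    response = re.search(r"<response>(.*?)</response>", text, re.DOTALL)
    return (
        thought.group(1).strip() if thought else "",
        response.group(1).strip() if response else text.strip(),
    )

# `prediction` comes from the generation snippet above.
thought, answer = split_reasoning(prediction)
print(answer)
```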
**Evaluation Results:**
<table>
<caption style="text-align:center"><b>Comparison with different models over various benchmarks<sup id="fnref1"><a href="#fn1">1</a></sup>. Scores of AlpacaEval-2.0 and Arena-Hard are calculated with thinking=True</b></caption>
<thead>
<tr>
<th style="text-align:left; background-color: #001d6c; color: white;">Models</th>
<th style="text-align:center; background-color: #001d6c; color: white;">Arena-Hard</th>
<th style="text-align:center; background-color: #001d6c; color: white;">AlpacaEval-2.0</th>
<th style="text-align:center; background-color: #001d6c; color: white;">MMLU</th>
<th style="text-align:center; background-color: #001d6c; color: white;">PopQA</th>
<th style="text-align:center; background-color: #001d6c; color: white;">TruthfulQA</th>
<th style="text-align:center; background-color: #001d6c; color: white;">BigBenchHard<sup id="fnref2"><a href="#fn2">2</a></sup></th>
<th style="text-align:center; background-color: #001d6c; color: white;">DROP<sup id="fnref3"><a href="#fn3">3</a></sup></th>
<th style="text-align:center; background-color: #001d6c; color: white;">GSM8K</th>
<th style="text-align:center; background-color: #001d6c; color: white;">HumanEval</th>
<th style="text-align:center; background-color: #001d6c; color: white;">HumanEval+</th>
<th style="text-align:center; background-color: #001d6c; color: white;">IFEval</th>
<th style="text-align:center; background-color: #001d6c; color: white;">AttaQ</th>
</tr></thead>
<tbody>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-2B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">23.3</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">27.17</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">57.11</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">20.55</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">59.79</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">61.82</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">20.99</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.55</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">79.45</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">75.26</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">63.59</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">84.7</td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.2-2B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">24.86</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">34.51</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">57.18</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">20.56</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">59.8</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">61.39</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">23.84</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.02</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">80.13</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">73.39</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">61.55</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">83.23</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.3-2B-Instruct</b></td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 28.86 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 43.45 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 55.88 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 18.4 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 58.97 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 63.91 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 44.33 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 72.48 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 80.51 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 75.68 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 65.8 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">87.47</td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Llama-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">36.43</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">27.22</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">69.15</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">28.79</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">52.79</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">73.43</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">71.23</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">83.24</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">85.32</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">80.15</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">79.10</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">83.43</td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">DeepSeek-R1-Distill-Llama-8B</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">17.17</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">21.85</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">45.80</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">13.25</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">47.43</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.39</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">49.73</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">72.18</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.54</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">62.91</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">66.50</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">42.87</td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Qwen-2.5-7B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">25.44</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">30.34</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">74.30</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">18.12</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">63.06</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">69.19</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">64.06</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">84.46</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">93.35</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">89.91</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">74.90</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">81.90</td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">DeepSeek-R1-Distill-Qwen-7B</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">10.36</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">15.35</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">50.72</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">9.94</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">47.14</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.38</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">51.78</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">78.47</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">79.89</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">78.43</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">59.10</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">42.45</td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">37.58</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">30.34</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">66.77</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">28.7</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">65.84</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">69.87</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">58.57</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">79.15</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">89.63</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">85.79</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">73.20</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">85.73</td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.2-8B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">55.25</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">61.19</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">66.79</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">28.04</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">66.92</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">71.86</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">58.29</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">81.65</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">89.35</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">85.72</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">74.31</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">84.7</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.3-8B-Instruct</b></td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 57.56 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 62.68 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 65.54 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 26.17 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 66.86 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 69.13 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 59.36 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 80.89 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 89.73 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 86.09 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 74.82 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">88.5</td>
</tr>
</tbody></table>
<table>
<caption style="text-align:center"><b>Math Benchmarks</b></caption>
<thead>
<tr>
<th style="text-align:left; background-color: #001d6c; color: white;">Models</th>
<th style="text-align:center; background-color: #001d6c; color: white;">AIME24</th>
<th style="text-align:center; background-color: #001d6c; color: white;">MATH-500</th>
</tr></thead>
<tbody>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-2B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 0.89 </td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 35.07 </td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.2-2B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 0.89 </td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 35.54 </td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.3-2B-Instruct</b></td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 3.28 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 58.09 </td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 1.97 </td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 48.73 </td>
</tr>
<tr>
<td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.2-8B-Instruct</td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 2.43 </td>
<td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 52.8 </td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.3-8B-Instruct</b></td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 8.12 </td>
<td style="text-align:center; background-color: #DAE8FF; color: black;"> 69.02 </td>
</tr>
</tbody></table>
**Training Data:**
Overall, our training data largely comprises two key sources: (1) publicly available datasets with permissive licenses, and (2) internal synthetically generated data targeted at enhancing reasoning capabilities.
<!-- A detailed attribution of datasets can be found in [Granite 3.2 Technical Report (coming soon)](#), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf). -->
**Infrastructure:**
We train Granite-3.3-8B-Instruct using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models across thousands of GPUs.
**Ethical Considerations and Limitations:**
Granite-3.3-8B-Instruct builds upon Granite-3.3-8B-Base, leveraging both permissively licensed open-source and select proprietary data for enhanced performance. Since it inherits its foundation from the previous model, all ethical considerations and limitations applicable to [Granite-3.3-8B-Base](https://huggingface.co/ibm-granite/granite-3.3-8b-base) remain relevant.
**Resources**
- βοΈ Learn about the latest updates with Granite: https://www.ibm.com/granite
- π Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- π‘ Learn about the latest Granite learning resources: https://github.com/ibm-granite-community/
<p><a href="#fnref1" title="Jump back to reference">[1]</a> Evaluated using <a href="https://github.com/allenai/olmes">OLMES</a> (except AttaQ and Arena-Hard scores)</p>
<p><a href="#fnref2" title="Jump back to reference">[2]</a> Added regex for more efficient answer extraction.</p>
<p><a href="#fnref3" title="Jump back to reference">[3]</a> Modified the implementation to handle some of the issues mentioned <a href="https://huggingface.co/blog/open-llm-leaderboard-drop">here</a></p>
<!-- ## Citation
```
@misc{granite-models,
author = {author 1, author2, ...},
title = {},
journal = {},
volume = {},
year = {2024},
url = {https://arxiv.org/abs/0000.00000},
}
``` -->
|
JulianChang/Qwen3-8B-Q8_0-GGUF
|
JulianChang
| 2025-06-22T17:08:00Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-06-22T17:07:31Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-8B
tags:
- llama-cpp
- gguf-my-repo
---
# JulianChang/Qwen3-8B-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-8B`](https://huggingface.co/Qwen/Qwen3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo JulianChang/Qwen3-8B-Q8_0-GGUF --hf-file qwen3-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo JulianChang/Qwen3-8B-Q8_0-GGUF --hf-file qwen3-8b-q8_0.gguf -c 2048
```
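Once the server is up, recent llama.cpp builds expose an OpenAI-compatible HTTP API; a quick smoke test might look like the following (port 8080 and the `/v1/chat/completions` path are llama-server defaults — adjust if you changed them):

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 64
  }'
```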
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo JulianChang/Qwen3-8B-Q8_0-GGUF --hf-file qwen3-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo JulianChang/Qwen3-8B-Q8_0-GGUF --hf-file qwen3-8b-q8_0.gguf -c 2048
```
|
OdiaGenAI/odiagenAI_llama7b_base_v1
|
OdiaGenAI
| 2025-06-22T17:05:05Z | 49 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-26T19:07:32Z |
---
license: cc-by-nc-4.0
---
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "OdiaGenAI/odiagenAI_llama7b_base_v1"

# Pick the device before using it, then load the tokenizer and model.
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

prompt = "ଭାରତ ବିଷୟରେ କିଛି କୁହନ୍ତୁ"  # Odia: "Tell me something about India"
inputs = tokenizer.encode(prompt, return_tensors="pt").to(device)

outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
prithivMLmods/WebClick-AgentBrowse-SigLIP2
|
prithivMLmods
| 2025-06-22T17:03:27Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"siglip",
"image-classification",
"agentbrowse",
"calendars",
"humanbrowse",
"SigLIP2",
"en",
"dataset:Hcompany/WebClick",
"arxiv:2502.14786",
"base_model:google/siglip2-base-patch16-224",
"base_model:finetune:google/siglip2-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-21T07:44:24Z |
---
license: apache-2.0
datasets:
- Hcompany/WebClick
base_model:
- google/siglip2-base-patch16-224
language:
- en
pipeline_tag: image-classification
library_name: transformers
tags:
- agentbrowse
- calendars
- humanbrowse
- SigLIP2
---

# **WebClick-AgentBrowse-SigLIP2**
> **WebClick-AgentBrowse-SigLIP2** is a vision-language encoder model fine-tuned from [`google/siglip2-base-patch16-224`](https://huggingface.co/google/siglip2-base-patch16-224) for **multi-class image classification**.
It is trained to detect and classify web UI click regions into three classes: `agentbrowse`, `calendars`, and `humanbrowse`. The model utilizes the `SiglipForImageClassification` architecture.
> [!note]
> **SigLIP 2**: *Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features*
> [https://arxiv.org/pdf/2502.14786](https://arxiv.org/pdf/2502.14786)

> [!note]
> agent-browse / calendars / human-browse
---
```py
Classification Report:
precision recall f1-score support
agentbrowse 0.9556 0.8763 0.9142 590
calendars 0.9707 0.9413 0.9558 528
humanbrowse 0.8481 0.9539 0.8979 521
accuracy 0.9219 1639
macro avg 0.9248 0.9238 0.9226 1639
weighted avg 0.9263 0.9219 0.9224 1639
```

---
## Label Space: 3 Classes
```
Class 0: agentbrowse
Class 1: calendars
Class 2: humanbrowse
```
---
## Install Dependencies
```bash
pip install -q transformers torch pillow gradio hf_xet
```
---
## Inference Code
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/WebClick-AgentBrowse-SigLIP2"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Updated label mapping
id2label = {
"0": "agentbrowse",
"1": "calendars",
"2": "humanbrowse"
}
def classify_image(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=classify_image,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=3, label="Click Type Classification"),
title="WebClick AgentBrowse Classifier",
description="Upload a web UI screenshot to classify regions: agentbrowse, calendars, or humanbrowse."
)
if __name__ == "__main__":
iface.launch()
```
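If you only need a one-off prediction without launching the Gradio UI, the same `classify_image` function can be called directly — assuming a local screenshot file at a hypothetical path:

```python
import numpy as np
from PIL import Image

img = np.array(Image.open("screenshot.png"))  # hypothetical local screenshot
print(classify_image(img))
```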
---
## ID2Label Testing
```py
%%capture
!pip install datasets==3.2.0
```
```py
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("Hcompany/WebClick")
# Extract the unique `bucket` values (a string field in the test split)
labels = sorted(set(example["bucket"] for example in dataset["test"]))
# Create id2label mapping
id2label = {str(i): label for i, label in enumerate(labels)}
# Print the mapping
print(id2label)
```
```
{'0': 'agentbrowse', '1': 'calendars', '2': 'humanbrowse'}
```
---
## Intended Use
**WebClick-AgentBrowse-SigLIP2** is intended for:
* **UI Understanding** β Classify user interaction zones in web interface screenshots.
* **Multimodal Agents** β Enhance visual perception for agent planning or RPA systems.
* **Interface Automation** β Facilitate click zone detection for automated agents.
* **Web Analytics** β Analyze user behavior patterns based on layout interaction predictions.
|
nic-festa/smol-function-calling
|
nic-festa
| 2025-06-22T17:00:24Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T15:22:46Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KarthikAI/InstantID-img2img
|
KarthikAI
| 2025-06-22T16:59:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"en",
"arxiv:2401.07519",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-06-22T14:51:10Z |
---
license: apache-2.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
---
# InstantID Model Card
<div align="center">
[**Project Page**](https://instantid.github.io/) **|** [**Paper**](https://arxiv.org/abs/2401.07519) **|** [**Code**](https://github.com/InstantID/InstantID) **|** [π€ **Gradio demo**](https://huggingface.co/spaces/InstantX/InstantID)
</div>
## Introduction
InstantID is a new state-of-the-art tuning-free method to achieve ID-preserving generation with only a single image, supporting various downstream tasks.
<div align="center">
<img src='examples/applications.png'>
</div>
## Usage
You can download the model directly from this repository, or via a Python script:
```python
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="InstantX/InstantID", filename="ControlNetModel/config.json", local_dir="./checkpoints")
hf_hub_download(repo_id="InstantX/InstantID", filename="ControlNetModel/diffusion_pytorch_model.safetensors", local_dir="./checkpoints")
hf_hub_download(repo_id="InstantX/InstantID", filename="ip-adapter.bin", local_dir="./checkpoints")
```
For the face encoder, you need to manually download it via this [URL](https://github.com/deepinsight/insightface/issues/1896#issuecomment-1023867304) and place it under `models/antelopev2`.
```python
# !pip install opencv-python transformers accelerate insightface
import diffusers
from diffusers.utils import load_image
from diffusers.models import ControlNetModel
import cv2
import torch
import numpy as np
from PIL import Image
from insightface.app import FaceAnalysis
from pipeline_stable_diffusion_xl_instantid import StableDiffusionXLInstantIDPipeline, draw_kps
# prepare 'antelopev2' under ./models
app = FaceAnalysis(name='antelopev2', root='./', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))
# prepare models under ./checkpoints
face_adapter = f'./checkpoints/ip-adapter.bin'
controlnet_path = f'./checkpoints/ControlNetModel'
# load IdentityNet
controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
pipe = StableDiffusionXLInstantIDPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.cuda()
# load adapter
pipe.load_ip_adapter_instantid(face_adapter)
```
Then, you can customize generation with your own face images
```python
# load an image
face_image = load_image("your-example.jpg")

# prepare face embedding
face_info = app.get(cv2.cvtColor(np.array(face_image), cv2.COLOR_RGB2BGR))
face_info = sorted(face_info, key=lambda x: (x['bbox'][2] - x['bbox'][0]) * (x['bbox'][3] - x['bbox'][1]))[-1]  # only use the largest face
face_emb = face_info['embedding']
face_kps = draw_kps(face_image, face_info['kps'])
pipe.set_ip_adapter_scale(0.8)
prompt = "analog film photo of a man. faded film, desaturated, 35mm photo, grainy, vignette, vintage, Kodachrome, Lomography, stained, highly detailed, found footage, masterpiece, best quality"
negative_prompt = "(lowres, low quality, worst quality:1.2), (text:1.2), watermark, painting, drawing, illustration, glitch, deformed, mutated, cross-eyed, ugly, disfigured"
# generate image
image = pipe(
    prompt, image_embeds=face_emb, image=face_kps, controlnet_conditioning_scale=0.8
).images[0]
```
For more details, please follow the instructions in our [GitHub repository](https://github.com/InstantID/InstantID).
## Usage Tips
1. If you're not satisfied with the similarity, try to increase the weight of "IdentityNet Strength" and "Adapter Strength".
2. If you feel that the saturation is too high, first decrease the Adapter strength. If it is still too high, then decrease the IdentityNet strength.
3. If you find that text control is not as expected, decrease Adapter strength.
4. If you find that the realistic style is not good enough, see our GitHub repo and use a more realistic base model.
## Demos
<div align="center">
<img src='examples/0.png'>
</div>
<div align="center">
<img src='examples/1.png'>
</div>
## Disclaimer
This project is released under Apache License and aims to positively impact the field of AI-driven image generation. Users are granted the freedom to create images using this tool, but they are obligated to comply with local laws and utilize it responsibly. The developers will not assume any responsibility for potential misuse by users.
## Citation
```bibtex
@article{wang2024instantid,
title={InstantID: Zero-shot Identity-Preserving Generation in Seconds},
author={Wang, Qixun and Bai, Xu and Wang, Haofan and Qin, Zekui and Chen, Anthony},
journal={arXiv preprint arXiv:2401.07519},
year={2024}
}
```
|
williamplacroix/final_llama_normal
|
williamplacroix
| 2025-06-22T16:59:23Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] | null | 2025-06-22T16:16:40Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
diegowlp/jobfinder-coverletter-model
|
diegowlp
| 2025-06-22T16:54:53Z | 0 | 0 | null |
[
"safetensors",
"t5",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T16:53:29Z |
---
license: apache-2.0
---
|
IlmaJiyadh/phi3-small-merged
|
IlmaJiyadh
| 2025-06-22T16:54:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-22T16:52:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
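Until the authors fill this in, here is a minimal sketch assuming the standard 🤗 transformers text-generation API; `trust_remote_code=True` reflects this repo's `custom_code` tag, and the prompt and generation settings are illustrative only:

```python
from transformers import pipeline

# trust_remote_code is needed because the repository ships custom Phi-3 code
generator = pipeline(
    "text-generation",
    model="IlmaJiyadh/phi3-small-merged",
    trust_remote_code=True,
    device_map="auto",
)
output = generator(
    [{"role": "user", "content": "Summarize what a language model is in one sentence."}],
    max_new_tokens=64,
    return_full_text=False,
)
print(output[0]["generated_text"])
```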
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bittu9988/Openchat-finetrained-model-AGG
|
bittu9988
| 2025-06-22T16:52:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T16:52:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
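The card does not state the model architecture or task, so as a placeholder here is a generic loading sketch; the `AutoModel` classes below are an assumption and should be swapped for the task-specific class once the model type is documented:

```python
from transformers import AutoModel, AutoTokenizer

# Generic loading; verify the actual architecture in the repo's config.json
tokenizer = AutoTokenizer.from_pretrained("bittu9988/Openchat-finetrained-model-AGG")
model = AutoModel.from_pretrained("bittu9988/Openchat-finetrained-model-AGG")
```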
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Razgony/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-shiny_polished_dinosaur
|
Razgony
| 2025-06-22T16:50:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am shiny polished dinosaur",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T00:56:43Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-shiny_polished_dinosaur
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am shiny polished dinosaur
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-shiny_polished_dinosaur
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Razgony/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-shiny_polished_dinosaur", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
BootesVoid/cmc7v20yq0acxbfifap1cjmka_cmc7v9znr0adybfifi9pccpef
|
BootesVoid
| 2025-06-22T16:49:29Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T16:49:28Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SEXY
---
# Cmc7V20Yq0Acxbfifap1Cjmka_Cmc7V9Znr0Adybfifi9Pccpef
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SEXY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SEXY",
"lora_weights": "https://huggingface.co/BootesVoid/cmc7v20yq0acxbfifap1cjmka_cmc7v9znr0adybfifi9pccpef/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc7v20yq0acxbfifap1cjmka_cmc7v9znr0adybfifi9pccpef', weight_name='lora.safetensors')
image = pipeline('SEXY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc7v20yq0acxbfifap1cjmka_cmc7v9znr0adybfifi9pccpef/discussions) to add images that show off what you've made with this LoRA.
|
debadtman26/local-healthcare-chatbot
|
debadtman26
| 2025-06-22T16:48:31Z | 0 | 1 |
mlx
|
[
"mlx",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"base_model:mlx-community/Ministral-8B-Instruct-2410-4bit",
"base_model:quantized:mlx-community/Ministral-8B-Instruct-2410-4bit",
"license:other",
"4-bit",
"region:us"
] |
text-generation
| 2025-06-22T16:46:44Z |
---
base_model: mlx-community/Ministral-8B-Instruct-2410-4bit
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
library_name: mlx
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
tags:
- mlx
inference: false
extra_gated_prompt: '# Mistral AI Research License
If You want to use a Mistral Model, a Derivative or an Output for any purpose that
is not expressly authorized under this Agreement, You must request a license from
Mistral AI, which Mistral AI may grant to You in Mistral AI''s sole discretion.
To discuss such a license, please contact Mistral AI via the website contact form:
https://mistral.ai/contact/
## 1. Scope and acceptance
**1.1. Scope of the Agreement.** This Agreement applies to any use, modification,
or Distribution of any Mistral Model by You, regardless of the source You obtained
a copy of such Mistral Model.
**1.2. Acceptance.** By accessing, using, modifying, Distributing a Mistral Model,
or by creating, using or distributing a Derivative of the Mistral Model, You agree
to be bound by this Agreement.
**1.3. Acceptance on behalf of a third-party.** If You accept this Agreement on
behalf of Your employer or another person or entity, You warrant and represent that
You have the authority to act and accept this Agreement on their behalf. In such
a case, the word "You" in this Agreement will refer to Your employer or such other
person or entity.
## 2. License
**2.1. Grant of rights**. Subject to Section 3 below, Mistral AI hereby grants
You a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable,
limited license to use, copy, modify, and Distribute under the conditions provided
in Section 2.2 below, the Mistral Model and any Derivatives made by or for Mistral
AI and to create Derivatives of the Mistral Model.
**2.2. Distribution of Mistral Model and Derivatives made by or for Mistral AI.**
Subject to Section 3 below, You may Distribute copies of the Mistral Model and/or
Derivatives made by or for Mistral AI, under the following conditions: You must
make available a copy of this Agreement to third-party recipients of the Mistral
Models and/or Derivatives made by or for Mistral AI you Distribute, it being specified
that any rights to use the Mistral Models and/or Derivatives made by or for Mistral
AI shall be directly granted by Mistral AI to said third-party recipients pursuant
to the Mistral AI Research License agreement executed between these parties; You
must retain in all copies of the Mistral Models the following attribution notice
within a "Notice" text file distributed as part of such copies: "Licensed by Mistral
AI under the Mistral AI Research License".
**2.3. Distribution of Derivatives made by or for You.** Subject to Section 3 below,
You may Distribute any Derivatives made by or for You under additional or different
terms and conditions, provided that: In any event, the use and modification of Mistral
Model and/or Derivatives made by or for Mistral AI shall remain governed by the
terms and conditions of this Agreement; You include in any such Derivatives made
by or for You prominent notices stating that You modified the concerned Mistral
Model; and Any terms and conditions You impose on any third-party recipients relating
to Derivatives made by or for You shall neither limit such third-party recipients''
use of the Mistral Model or any Derivatives made by or for Mistral AI in accordance
with the Mistral AI Research License nor conflict with any of its terms and conditions.
## 3. Limitations
**3.1. Misrepresentation.** You must not misrepresent or imply, through any means,
that the Derivatives made by or for You and/or any modified version of the Mistral
Model You Distribute under your name and responsibility is an official product of
Mistral AI or has been endorsed, approved or validated by Mistral AI, unless You
are authorized by Us to do so in writing.
**3.2. Usage Limitation.** You shall only use the Mistral Models, Derivatives (whether
or not created by Mistral AI) and Outputs for Research Purposes.
## 4. Intellectual Property
**4.1. Trademarks.** No trademark licenses are granted under this Agreement, and
in connection with the Mistral Models, You may not use any name or mark owned by
or associated with Mistral AI or any of its affiliates, except (i) as required for
reasonable and customary use in describing and Distributing the Mistral Models and
Derivatives made by or for Mistral AI and (ii) for attribution purposes as required
by this Agreement.
**4.2. Outputs.** We claim no ownership rights in and to the Outputs. You are solely
responsible for the Outputs You generate and their subsequent uses in accordance
with this Agreement. Any Outputs shall be subject to the restrictions set out in
Section 3 of this Agreement.
**4.3. Derivatives.** By entering into this Agreement, You accept that any Derivatives
that You may create or that may be created for You shall be subject to the restrictions
set out in Section 3 of this Agreement.
## 5. Liability
**5.1. Limitation of liability.** In no event, unless required by applicable law
(such as deliberate and grossly negligent acts) or agreed to in writing, shall Mistral
AI be liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this Agreement
or out of the use or inability to use the Mistral Models and Derivatives (including
but not limited to damages for loss of data, loss of goodwill, loss of expected
profit or savings, work stoppage, computer failure or malfunction, or any damage
caused by malware or security breaches), even if Mistral AI has been advised of
the possibility of such damages.
**5.2. Indemnification.** You agree to indemnify and hold harmless Mistral AI from
and against any claims, damages, or losses arising out of or related to Your use
or Distribution of the Mistral Models and Derivatives.
## 6. Warranty
**6.1. Disclaimer.** Unless required by applicable law or prior agreed to by Mistral
AI in writing, Mistral AI provides the Mistral Models and Derivatives on an "AS
IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. Mistral AI does not represent
nor warrant that the Mistral Models and Derivatives will be error-free, meet Your
or any third party''s requirements, be secure or will allow You or any third party
to achieve any kind of result or generate any kind of content. You are solely responsible
for determining the appropriateness of using or Distributing the Mistral Models
and Derivatives and assume any risks associated with Your exercise of rights under
this Agreement.
## 7. Termination
**7.1. Term.** This Agreement is effective as of the date of your acceptance of
this Agreement or access to the concerned Mistral Models or Derivatives and will
continue until terminated in accordance with the following terms.
**7.2. Termination.** Mistral AI may terminate this Agreement at any time if You
are in breach of this Agreement. Upon termination of this Agreement, You must cease
to use all Mistral Models and Derivatives and shall permanently delete any copy
thereof. The following provisions, in their relevant parts, will survive any termination
or expiration of this Agreement, each for the duration necessary to achieve its
own intended purpose (e.g. the liability provision will survive until the end of
the applicable limitation period):Sections 5 (Liability), 6(Warranty), 7 (Termination)
and 8 (General Provisions).
**7.3. Litigation.** If You initiate any legal action or proceedings against Us
or any other entity (including a cross-claim or counterclaim in a lawsuit), alleging
that the Model or a Derivative, or any part thereof, infringe upon intellectual
property or other rights owned or licensable by You, then any licenses granted to
You under this Agreement will immediately terminate as of the date such legal action
or claim is filed or initiated.
## 8. General provisions
**8.1. Governing laws.** This Agreement will be governed by the laws of France,
without regard to choice of law principles, and the UN Convention on Contracts for
the International Sale of Goods does not apply to this Agreement.
**8.2. Competent jurisdiction.** The courts of Paris shall have exclusive jurisdiction
of any dispute arising out of this Agreement.
**8.3. Severability.** If any provision of this Agreement is held to be invalid,
illegal or unenforceable, the remaining provisions shall be unaffected thereby and
remain valid as if such provision had not been set forth herein.
## 9. Definitions
"Agreement": means this Mistral AI Research License agreement governing the access,
use, and Distribution of the Mistral Models, Derivatives and Outputs.
"Derivative": means any (i) modified version of the Mistral Model (including but
not limited to any customized or fine-tuned version thereof), (ii) work based on
the Mistral Model, or (iii) any other derivative work thereof.
"Distribution", "Distributing", "Distribute" or "Distributed": means supplying,
providing or making available, by any means, a copy of the Mistral Models and/or
the Derivatives as the case may be, subject to Section 3 of this Agreement.
"Mistral AI", "We" or "Us": means Mistral AI, a French sociΓ©tΓ© par actions simplifiΓ©e
registered in the Paris commercial registry under the number 952 418 325, and having
its registered seat at 15, rue des Halles, 75001 Paris.
"Mistral Model": means the foundational large language model(s), and its elements
which include algorithms, software, instructed checkpoints, parameters, source code
(inference code, evaluation code and, if applicable, fine-tuning code) and any other
elements associated thereto made available by Mistral AI under this Agreement, including,
if any, the technical documentation, manuals and instructions for the use and operation
thereof.
"Research Purposes": means any use of a Mistral Model, Derivative, or Output that
is solely for (a) personal, scientific or academic research, and (b) for non-profit
and non-commercial purposes, and not directly or indirectly connected to any commercial
activities or business operations. For illustration purposes, Research Purposes
does not include (1) any usage of the Mistral Model, Derivative or Output by individuals
or contractors employed in or engaged by companies in the context of (a) their daily
tasks, or (b) any activity (including but not limited to any testing or proof-of-concept)
that is intended to generate revenue, nor (2) any Distribution by a commercial entity
of the Mistral Model, Derivative or Output whether in return for payment or free
of charge, in any medium or form, including but not limited to through a hosted
or managed service (e.g. SaaS, cloud instances, etc.), or behind a software layer.
"Outputs": means any content generated by the operation of the Mistral Models or
the Derivatives from a prompt (i.e., text instructions) provided by users. For
the avoidance of doubt, Outputs do not include any components of a Mistral Models,
such as any fine-tuned versions of the Mistral Models, the weights, or parameters.
"You": means the individual or entity entering into this Agreement with Mistral
AI.
*Mistral AI processes your personal data below to provide the model and enforce
its license. If you are affiliated with a commercial entity, we may also send you
communications about our models. For more information on your rights and data handling,
please see our <a href="https://mistral.ai/terms/">privacy policy</a>.*'
extra_gated_fields:
First Name: text
Last Name: text
Country: country
Affiliation: text
Job title: text
I understand that I can only use the model, any derivative versions and their outputs for non-commercial research purposes: checkbox
? I understand that if I am a commercial entity, I am not permitted to use or distribute
the model internally or externally, or expose it in my own offerings without a
commercial license
: checkbox
? I understand that if I upload the model, or any derivative version, on any platform,
I must include the Mistral Research License
: checkbox
? I understand that for commercial use of the model, I can contact Mistral or use
the Mistral AI API on la Plateforme or any of our cloud provider partners
: checkbox
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Mistral Privacy Policy
: checkbox
geo: ip_location
extra_gated_description: Mistral AI processes your personal data below to provide
the model and enforce its license. If you are affiliated with a commercial entity,
we may also send you communications about our models. For more information on your
rights and data handling, please see our <a href="https://mistral.ai/terms/">privacy
policy</a>.
extra_gated_button_content: Submit
pipeline_tag: text-generation
---
|
Hachipo/Qwen2.5-7B-MIFT-en_newbase_v2
|
Hachipo
| 2025-06-22T16:48:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T16:45:34Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
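Until the authors fill this in, a minimal sketch assuming the standard Qwen2.5 chat template (the prompt and generation settings are illustrative, not from the model authors):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hachipo/Qwen2.5-7B-MIFT-en_newbase_v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello! What can you do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```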
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
App54gdkfs4/4hMB2kGh6gzEbf
|
App54gdkfs4
| 2025-06-22T16:48:13Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T16:48:13Z |
---
license: apache-2.0
---
|
dhanraj2006/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wise_yawning_heron
|
dhanraj2006
| 2025-06-22T16:47:17Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am wise yawning heron",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T18:47:02Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wise_yawning_heron
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am wise yawning heron
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wise_yawning_heron
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dhanraj2006/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wise_yawning_heron", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
silx-ai/Quasar-V4-Tiny-Post
|
silx-ai
| 2025-06-22T16:43:04Z | 117 | 0 | null |
[
"pytorch",
"infinity_former",
"custom_code",
"dataset:HuggingFaceTB/smoltalk",
"base_model:silx-ai/QuasarV4-Tiny",
"base_model:finetune:silx-ai/QuasarV4-Tiny",
"region:us"
] | null | 2025-06-21T09:36:25Z |
---
datasets:
- HuggingFaceTB/smoltalk
base_model:
- silx-ai/QuasarV4-Tiny
---
# Quasar-V4-Tiny (Post-trained)
**Model ID:** `silx-ai/Quasar-V4-Tiny-Post`
**Architecture:** Linear Attention with Kernel Feature Maps
**Developed by:** SILX AI
**Powered by:** [gputrader.io](https://gputrader.io)
---
## Description
This is the **post-trained** version of `Quasar-V4-Tiny`, an experimental model that uses **Linear Attention with Kernel Feature Maps**.
The model architecture is under development to explore efficient attention mechanisms as an alternative to standard transformers.
This version is trained on the [SmolTalk dataset](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) with a **very small batch size** and **few optimization steps**.
The purpose of this checkpoint is **not** to generate high-quality or accurate outputs.
It is intended only to **validate that the Quasar-V4 architecture works end-to-end** (pretraining → finetuning → inference).
---
## Training Details
- **Base training tokens:** ~1–2 billion tokens
- **Post-training dataset:** SmolTalk
- **Batch size:** Very small (experimental)
- **Steps:** Minimal, only for architecture testing
---
## Limitations
- Not suitable for production or research use.
- Outputs are likely to be low-quality or inconsistent.
- This checkpoint is primarily for internal debugging and architecture validation.
---
## Acknowledgements
Special thanks to **[gputrader.io](https://gputrader.io)** for providing the compute resources that made this experiment possible.
---
## Future Work
We plan to scale up the architecture, pretrain on larger datasets, and benchmark the model for meaningful downstream tasks once the design is validated.
Stay tuned.
|
csikasote/whisper-medium-bemgen-female-42
|
csikasote
| 2025-06-22T16:38:02Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:bemgen",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-22T15:03:34Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- bemgen
metrics:
- wer
model-index:
- name: whisper-medium-bemgen-female-42
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: bemgen
type: bemgen
metrics:
- name: Wer
type: wer
value: 0.5420087575259989
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-bemgen-female-42
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the bemgen dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7313
- Wer: 0.5420
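For reference, transcription can be run with the standard 🤗 transformers ASR pipeline; a minimal sketch, where the audio path is illustrative:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/whisper-medium-bemgen-female-42",
)
result = asr("sample.wav")  # path to an audio file; illustrative
print(result["text"])
```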
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.6518 | 0.5468 | 200 | 0.9219 | 0.6528 |
| 0.3634 | 1.0930 | 400 | 0.8127 | 0.5904 |
| 0.3602 | 1.6398 | 600 | 0.7313 | 0.5420 |
| 0.1629 | 2.1859 | 800 | 0.7563 | 0.5376 |
| 0.1727 | 2.7327 | 1000 | 0.7458 | 0.5324 |
| 0.0908 | 3.2789 | 1200 | 0.8193 | 0.5371 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
miike-ai/DeepSeek-R1-0528-Qwen3-11B
|
miike-ai
| 2025-06-22T16:29:12Z | 42 | 0 | null |
[
"safetensors",
"qwen3",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"region:us"
] | null | 2025-06-08T20:36:55Z |
---
base_model:
- deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
---
|
PREM4518/fine-tuned-sentiment-model
|
PREM4518
| 2025-06-22T16:26:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T16:22:41Z |
```bash
# Clone the empty model repository
git clone https://huggingface.co/PREM4518/fine-tuned-sentiment-model

# Go inside the folder
cd fine-tuned-sentiment-model

# Copy your model files from your local 'trained_model' folder
cp -r ../Prem/trained_model/* .

# Set up Git LFS
git lfs install
git lfs track "*.bin"

# Add and push the files
git add .
git commit -m "Uploading fine-tuned sentiment model"
git push
```
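Once pushed, the checkpoint can be loaded back from the Hub; a minimal sketch assuming a standard sequence-classification head, which the repository name suggests but this card does not confirm:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumes a sequence-classification checkpoint; verify against the uploaded config.json
tokenizer = AutoTokenizer.from_pretrained("PREM4518/fine-tuned-sentiment-model")
model = AutoModelForSequenceClassification.from_pretrained("PREM4518/fine-tuned-sentiment-model")
```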
|
SabahNawab/llama3.2_3B-urdu-qlora_0
|
SabahNawab
| 2025-06-22T16:22:52Z | 14 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:adapter:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"region:us"
] | null | 2025-06-20T10:04:35Z |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-3B
tags:
- generated_from_trainer
model-index:
- name: llama3.2_3B-urdu-qlora_0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3.2_3B-urdu-qlora_0
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7160
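Since this repository contains a PEFT adapter, a minimal loading sketch applies it on top of the base model (access to the gated meta-llama/Llama-3.2-3B checkpoint is required; settings are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B", device_map="auto")
model = PeftModel.from_pretrained(base, "SabahNawab/llama3.2_3B-urdu-qlora_0")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")
```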
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7976 | 1.0 | 469 | 1.7886 |
| 1.6975 | 2.0 | 938 | 1.7401 |
| 1.6522 | 3.0 | 1407 | 1.7223 |
| 1.6255 | 4.0 | 1876 | 1.7170 |
| 1.6085 | 4.9899 | 2340 | 1.7160 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
saketh-chervu/wordle-agent-sft-with-dpo-golden-pairs
|
saketh-chervu
| 2025-06-22T16:21:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T16:16:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
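Until the authors fill this in, a minimal chat-style sketch; the Wordle prompt format the agent expects is not documented, so the prompt below is only illustrative:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="saketh-chervu/wordle-agent-sft-with-dpo-golden-pairs",
    device_map="auto",
)
out = generator(
    [{"role": "user", "content": "Let's play Wordle. Give me your first five-letter guess."}],
    max_new_tokens=64,
    return_full_text=False,
)
print(out[0]["generated_text"])
```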
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TeetouchQQ/exp-model4-Q4_K_M-GGUF
|
TeetouchQQ
| 2025-06-22T16:14:53Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:TeetouchQQ/exp-model4",
"base_model:quantized:TeetouchQQ/exp-model4",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-22T16:14:41Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: TeetouchQQ/exp-model4
---
# TeetouchQQ/exp-model4-Q4_K_M-GGUF
This model was converted to GGUF format from [`TeetouchQQ/exp-model4`](https://huggingface.co/TeetouchQQ/exp-model4) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TeetouchQQ/exp-model4) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo TeetouchQQ/exp-model4-Q4_K_M-GGUF --hf-file exp-model4-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo TeetouchQQ/exp-model4-Q4_K_M-GGUF --hf-file exp-model4-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo TeetouchQQ/exp-model4-Q4_K_M-GGUF --hf-file exp-model4-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo TeetouchQQ/exp-model4-Q4_K_M-GGUF --hf-file exp-model4-q4_k_m.gguf -c 2048
```
|
delta-lab-ai/coder
|
delta-lab-ai
| 2025-06-22T16:14:19Z | 100 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2502.00963",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-19T18:57:07Z |
---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
This repository contains the model used for autoformalization and reasoning as presented in [PDE-Controller: LLMs for Autoformalization and Reasoning of PDEs](https://huggingface.co/papers/2502.00963).
Project page: https://pde-controller.github.io/
Code: The implementation can be found at https://github.com/Delta-Lab-AI/PDE-Controller
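For quick experimentation, a minimal text-generation sketch; the exact prompt format for autoformalization is defined in the project code linked above, so the prompt here is only illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("delta-lab-ai/coder")
model = AutoModelForCausalLM.from_pretrained("delta-lab-ai/coder", device_map="auto")

prompt = "Formalize: keep the temperature below 100 on the interval [0, 10]."  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```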
|
lgrcia/ballet
|
lgrcia
| 2025-06-22T16:13:10Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-06-11T14:31:26Z |
---
license: mit
---
# ballet
A centroiding model for star cutout images (Flax/JAX-based).
|
RenatoWBS/IFAW50
|
RenatoWBS
| 2025-06-22T15:56:32Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:cc-by-nd-4.0",
"region:us"
] |
text-to-image
| 2025-06-22T15:53:52Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: ifa, w50,truck,lkw
output:
url: >-
images/IFA LKW W50 4x4 Expedition 3D Modell Ansicht von schrΓ€g vorn in
einer Winterlandschaft.jpg
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
instance_prompt: ifa, w50, lkw, truck
license: cc-by-nd-4.0
---
# IFAW50
<Gallery />
## Model description
ifaW50 model
## Trigger words
You should use `ifa` to trigger the image generation.
You should use `w50` to trigger the image generation.
You should use `lkw` to trigger the image generation.
You should use `truck` to trigger the image generation.
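A minimal diffusers sketch for applying this LoRA on the SD 1.5 base is shown below; the `weight_name` is an assumption, so check the Files & versions tab for the actual `.safetensors` filename:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# weight_name is assumed; verify the actual filename in the repository
pipeline.load_lora_weights("RenatoWBS/IFAW50", weight_name="IFAW50.safetensors")
image = pipeline("ifa, w50, lkw, truck, winter landscape").images[0]
```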
## Download model
Weights for this model are available in Safetensors format.
[Download](/RenatoWBS/IFAW50/tree/main) them in the Files & versions tab.
|
QinShiHuangisavailable/output
|
QinShiHuangisavailable
| 2025-06-22T15:53:21Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:deepseek-ai/deepseek-math-7b-rl",
"base_model:finetune:deepseek-ai/deepseek-math-7b-rl",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T16:13:58Z |
---
base_model: deepseek-ai/deepseek-math-7b-rl
library_name: transformers
model_name: output
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for output
This model is a fine-tuned version of [deepseek-ai/deepseek-math-7b-rl](https://huggingface.co/deepseek-ai/deepseek-math-7b-rl).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="QinShiHuangisavailable/output", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ishayankoo/ppo-LunarLander-v2
|
ishayankoo
| 2025-06-22T15:50:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-22T15:50:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.99 +/- 11.98
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; verify it in the repo's Files tab):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed
checkpoint = load_from_hub(repo_id="ishayankoo/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
methinkss/ffs8
|
methinkss
| 2025-06-22T15:49:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T14:32:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ZihminWang/TinyLlama-1.1B-Chat-v1.0-user-intention-v0.2
|
ZihminWang
| 2025-06-22T15:46:26Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T15:46:26Z |
---
license: apache-2.0
---
|
Mungert/II-Medical-8B-1706-GGUF
|
Mungert
| 2025-06-22T15:45:49Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"arxiv:2503.19633",
"arxiv:2501.19393",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-22T10:29:13Z |
---
library_name: transformers
tags: []
---
# <span style="color: #7FFF7F;">II-Medical-8B-1706 GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`aa0ef5c5`](https://github.com/ggerganov/llama.cpp/commit/aa0ef5c578eef4c2adc7be1282f21bab5f3e8d26).
---
## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>
I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.
In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
π [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)
While this does increase model file size, it significantly improves precision for a given quantization level.
### **I'd love your feedbackβhave you tried this? How does it perform for you?**
---
<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>
---
<!--Begin Original Model Card-->
# II-Medical-8B-1706
<div style="display: flex; justify-content: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6389496ff7d3b0df092095ed/73Y-oDmehp0eJ2HWrfn3V.jpeg" width="800">
</div>
## I. Model Overview
II-Medical-8B-1706 is the newest advanced large language model developed by Intelligent Internet, specifically engineered to enhance AI-driven medical reasoning. Following the positive reception of our previous [II-Medical-8B](https://huggingface.co/Intelligent-Internet/II-Medical-8B), this new iteration significantly advances the capabilities of medical question answering.
We also provide static quant versions of II-Medical-8B-1706 [here](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF).
## II. Training Methodology
We collected and generated a comprehensive set of reasoning datasets for the medical domain and performed SFT fine-tuning on the **Qwen/Qwen3-8B** model. Following this, we further optimized the SFT model by training DAPO on a hard-reasoning dataset to boost performance.
For the SFT stage we used the following hyperparameters (a configuration sketch follows the list):
- Max length: 16378.
- Batch size: 128.
- Learning rate: 5e-5.
- Number of epochs: 6.
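As a rough illustration, here is a hypothetical sketch of how these settings could map onto a TRL `SFTConfig`; the dataset path is a placeholder and the batch-size sharding of the real run is unknown:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="medical_reasoning_sft.jsonl", split="train")  # placeholder path

trainer = SFTTrainer(
    model="Qwen/Qwen3-8B",
    train_dataset=dataset,
    args=SFTConfig(
        max_length=16378,                 # "Max length" above
        per_device_train_batch_size=128,  # card reports batch size 128; device sharding unknown
        learning_rate=5e-5,
        num_train_epochs=6,
        output_dir="ii-medical-sft",
    ),
)
trainer.train()
```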
For the Reinforcement Learning (RL) stage, we designed a two-stage training process. The first stage focuses on enhancing the model's reasoning capabilities for complex medical questions. The second stage ensures that the model's responses prioritize safety and helpfulness. Both stages utilize the following configuration:
- Max prompt length: 2048 tokens.
- Max response length: 12288 tokens.
- Overlong buffer: Enabled, 4096 tokens, penalty factor 1.0.
- Clip ratios: Low 0.2, High 0.28.
- Batch sizes: Train prompt 512, Generation prompt 1536, Mini-batch 32.
- Responses per prompt: 16.
- Temperature: 1.0, Top-p: 1.0, Top-k: -1 (vLLM rollout).
- Learning rate: 1e-6, Warmup steps: 10, Weight decay: 0.1.
- Loss aggregation: Token-mean.
- Gradient clipping: 1.0.
- Entropy coefficient: 0.
## III. Evaluation Results


Our II-Medical-8B-1706 model achieved a 46.8% score on [HealthBench](https://openai.com/index/healthbench/), a comprehensive open-source benchmark evaluating the performance and safety of large language models in healthcare. This performance is comparable to MedGemma-27B from Google. We provide a comparison to models available in ChatGPT below.
<!--  -->
Detailed result for HealthBench can be found [here](https://huggingface.co/datasets/Intelligent-Internet/OpenAI-HealthBench-II-Medical-8B-1706-GPT-4.1).
<!--  -->
We also evaluate on nine other medical QA benchmarks: MedMCQA, MedQA, PubMedQA, medical-related questions from MMLU-Pro, small QA sets from The Lancet and the New England Journal of Medicine, the 4-option and 5-option splits from the MedBullets platform, and MedXpertQA. HealthBench (discussed above) is included in the table for comparison.
| Model | MedMC | MedQA | PubMed | MMLU-P | HealthBench | Lancet | MedB-4 | MedB-5 | MedX | NEJM | Avg |
|--------------------------|-------|-------|--------|--------|------|--------|--------|--------|------|-------|-------|
| [HuatuoGPT-o1-72B](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-72B) | **76.76** | 88.85 | **79.90** | 80.46 | 22.73 | 70.87 | 77.27 | 73.05 |23.53 |76.29 | 66.97 |
| [M1](https://huggingface.co/UCSC-VLAA/m1-7B-23K) | 62.54 | 75.81 | 75.80 | 65.86 | 15.51 | 62.62 | 63.64 | 59.74 |19.59 |64.34 | 56.55 |
| [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) | 66.53 | 81.38 | 73.9 | 77.85 | 42.27 | 66.26 | 68.83 | 62.66 |19.59 |69.65 | 62.89 |
| [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) | 74.18 | 88.92 | 76.1 | 80.7 | **47.08** | 72.33 | 72.27 | 71.42 |28.04 |76.94 | 68.80 |
| [MedGemma-27B-IT](https://huggingface.co/google/medgemma-27b-text-it) | 73.24 | 87.27 | 70.9 | 80.13 | 46.54| 70.14 | 75.32 | 73.37 |25.55 |76.28 | 67.87 |
| [II-Medical-8B](https://huggingface.co/Intelligent-Internet/II-Medical-8B) | 71.57 | 87.90 | 78.7 |**80.46** | 40.02| 70.38 | 78.25 | 72.07 |25.26 |73.13 |67.77 |
| [II-Medical-8B-1706](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706) | 74.44 | **88.61** | 79.8 | 81.04 | 46.8 | 71.60 | **80.84** | **74.67** |**29.63** |77.61 | **70.5** |
## IV. Dataset Curation
The training dataset comprises 2.3M samples from the following sources:
### 1. Public Medical Reasoning Datasets
- [General Medical Reasoning](https://huggingface.co/datasets/GeneralReasoning/GeneralThought-430K)
- [Medical-R1-Distill-Data](https://huggingface.co/datasets/FreedomIntelligence/Medical-R1-Distill-Data)
- [Medical-R1-Distill-Data-Chinese](https://huggingface.co/datasets/FreedomIntelligence/Medical-R1-Distill-Data-Chinese)
- [UCSC-VLAA/m23k-tokenized](https://huggingface.co/datasets/UCSC-VLAA/m23k-tokenized)
### 2. Synthetic Medical QA Data with Qwen3-235B-A22B
Generated from established medical datasets:
- [MedMcQA](https://huggingface.co/datasets/openlifescienceai/medmcqa)
- [MedQA](https://huggingface.co/datasets/bigbio/med_qa)
- [MedReason](https://huggingface.co/datasets/UCSC-VLAA/MedReason)
### 3. Curated Medical R1 Traces (338,055 samples)
First we gather all the public R1 traces from:
- [PrimeIntellect/SYNTHETIC-1](https://huggingface.co/collections/PrimeIntellect/synthetic-1-67a2c399cfdd6c9f7fae0c37)
- [GeneralReasoning/GeneralThought-430K](https://huggingface.co/datasets/GeneralReasoning/GeneralThought-430K)
- [a-m-team/AM-DeepSeek-R1-Distilled-1.4M](https://arxiv.org/abs/2503.19633v1)
- [open-thoughts/OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M)
- [nvidia/Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset): Science subset only
- Other resources: [cognitivecomputations/dolphin-r1](https://huggingface.co/datasets/cognitivecomputations/dolphin-r1), [ServiceNow-AI/R1-Distill-SFT](https://huggingface.co/datasets/ServiceNow-AI/R1-Distill-SFT),...
All R1 reasoning traces were processed through a domain-specific pipeline as follows (a code sketch follows the list):
1. Embedding Generation: Prompts are embedded using sentence-transformers/all-MiniLM-L6-v2.
2. Clustering: Perform K-means clustering with 50,000 clusters.
3. Domain Classification:
- For each cluster, select the 10 prompts nearest to the cluster center.
- Classify the domain of each selected prompt using Qwen2.5-32b-Instruct.
- Assign the cluster's domain based on majority voting among the classified prompts.
4. Domain Filtering: Keep only clusters labeled as Medical or Biology for the final dataset.
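A rough, hypothetical sketch of steps 1 and 2 plus the nearest-to-centroid selection in step 3 (cluster count reduced here for illustration; the card uses 50,000):
```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

prompts = ["prompt one ...", "prompt two ...", "prompt three ..."]  # real corpus: millions of prompts

# 1. Embedding generation
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
embeddings = embedder.encode(prompts, normalize_embeddings=True)

# 2. K-means clustering (the card uses 50,000 clusters)
kmeans = KMeans(n_clusters=2, random_state=0).fit(embeddings)

# 3. Select the 10 prompts nearest each centroid for LLM-based domain labeling
for c, center in enumerate(kmeans.cluster_centers_):
    members = np.where(kmeans.labels_ == c)[0]
    nearest = members[np.argsort(np.linalg.norm(embeddings[members] - center, axis=1))[:10]]
    print(f"cluster {c}: candidate prompts {nearest.tolist()}")
```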
### 4. Other
- 269k prompts from [Text-Book-QA-subset](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus)
- 162k prompts from [Text-Patient-QA-subset](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus)
- 112k prompts from [Chat-Doctor-QA](https://huggingface.co/datasets/lavita/ChatDoctor-HealthCareMagic-100k)
- An instruction-following subset from [a-m-team/AM-Qwen3-Distilled](https://huggingface.co/datasets/a-m-team/AM-Qwen3-Distilled)
### Preprocessing Data
1. Filtering for Complete Generations
- Retained only traces with complete generation outputs.
2. Length-based Filtering
- Minimum threshold: keep only prompts with more than 3 words.
- Wait Token Filter: removed traces with more than 47 occurrences of "Wait" (97th percentile threshold).
3. Response Deduplication (see the sketch below)
- N-gram size: 4
- Jaccard threshold: 0.7
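A minimal sketch of the deduplication step, assuming whitespace tokenization and pairwise comparison against already-kept responses (the real pipeline's tokenization and pairing strategy are not specified):
```python
def ngrams(text, n=4):
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def dedup_responses(responses, n=4, threshold=0.7):
    kept, kept_grams = [], []
    for r in responses:
        g = ngrams(r, n)
        # Keep a response only if it is not a near-duplicate of anything kept so far.
        if all(jaccard(g, kg) < threshold for kg in kept_grams):
            kept.append(r)
            kept_grams.append(g)
    return kept
```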
### Data Decontamination
We use a two-step decontamination process (a sketch of step 1 follows below):
1. Following the [open-r1](https://github.com/huggingface/open-r1) project, we decontaminate the dataset against the evaluation datasets using 10-grams.
2. After that, we apply the fuzzy decontamination from the [`s1k`](https://arxiv.org/abs/2501.19393) method with a 90% threshold.
**Our pipeline is carefully decontaminated with the evaluation datasets.**
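A minimal sketch of the first decontamination step, assuming whitespace tokenization: a training sample is flagged if any of its 10-grams also appears in an evaluation set.
```python
def ngrams10(text, n=10):
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def build_eval_bank(eval_texts):
    bank = set()
    for t in eval_texts:
        bank |= ngrams10(t)
    return bank

def is_contaminated(train_sample, eval_bank):
    # Flag the sample if any of its 10-grams also appears in an evaluation set.
    return not ngrams10(train_sample).isdisjoint(eval_bank)
```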
## V. How To Use
Our model can be utilized in the same manner as Qwen or Deepseek-R1-Distill models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```bash
vllm serve Intelligent-Internet/II-Medical-8B-1706
```
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang):
```bash
python -m sglang.launch_server --model Intelligent-Internet/II-Medical-8B-1706
```
## VI. Usage Guidelines
- Recommended Sampling Parameters: temperature = 0.6, top_p = 0.9
- When prompting the model, explicitly request step-by-step reasoning and ask for the final answer within \boxed{} (e.g., "Please reason step-by-step, and put your final answer within \boxed{}."). A minimal request sketch is shown below.
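Assuming the vLLM server from section V is running locally, a request sketch with the OpenAI-compatible client (base URL and API key are the vLLM defaults) looks like this:
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Intelligent-Internet/II-Medical-8B-1706",
    messages=[{
        "role": "user",
        "content": "A 45-year-old presents with chest pain ... "
                   "Please reason step-by-step, and put your final answer within \\boxed{}.",
    }],
    temperature=0.6,
    top_p=0.9,
)
print(response.choices[0].message.content)
```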
## VII. Limitations and Considerations
- The dataset may contain inherent biases from its source materials.
- Medical knowledge requires regular updates.
- Please note that **it is not suitable for medical use.**
## VIII. Citation
```bibtex
@misc{2025II-Medical-8B-1706,
title={II-Medical-8B: Medical Reasoning Model},
author={Intelligent Internet},
year={2025}
}
```
<!--End Original Model Card-->
---
# <span id="testllm" style="color: #7F7FFF;">π If you find these models useful</span>
Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
π [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
The full open-source code for the Quantum Network Monitor Service is available at my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder), if you want to do it yourself.
π¬ **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)
### **What Iβm Testing**
Iβm pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap security scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
π‘ **TestLLM** β Current experimental model (llama.cpp on 2 CPU threads on huggingface docker space):
- β
**Zero-configuration setup**
- β³ 30s load time (slow inference but **no API costs**) . No token limited as the cost is low.
- π§ **Help wanted!** If youβre into **edge-device AI**, letβs collaborate!
### **Other Assistants**
π’ **TurboLLM** β Uses **gpt-4.1-mini** :
- **It performs very well but unfortunatly OpenAI charges per token. For this reason tokens usage is limited.
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
π΅ **HugLLM** β Latest Open-source models:
- π Runs on Hugging Face Inference API. Performs pretty well using the lastest models hosted on Novita.
### π‘ **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encyption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAIβall out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) β. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! π
|
Trappu/Picaro-24b-2506-adapters-318
|
Trappu
| 2025-06-22T15:43:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-ChatML",
"base_model:adapter:anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-ChatML",
"region:us"
] | null | 2025-06-21T23:52:42Z |
---
base_model: anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-ChatML
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
BootesVoid/cmc7rh7cm09wfbfifqj3xwwmd_cmc7s9lfl09ypbfiffqilq412
|
BootesVoid
| 2025-06-22T15:38:52Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T15:38:51Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: JESSIE
---
# Cmc7Rh7Cm09Wfbfifqj3Xwwmd_Cmc7S9Lfl09Ypbfiffqilq412
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `JESSIE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "JESSIE",
"lora_weights": "https://huggingface.co/BootesVoid/cmc7rh7cm09wfbfifqj3xwwmd_cmc7s9lfl09ypbfiffqilq412/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc7rh7cm09wfbfifqj3xwwmd_cmc7s9lfl09ypbfiffqilq412', weight_name='lora.safetensors')
image = pipeline('JESSIE').images[0]
image.save("jessie.png")  # filename is arbitrary
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc7rh7cm09wfbfifqj3xwwmd_cmc7s9lfl09ypbfiffqilq412/discussions) to add images that show off what youβve made with this LoRA.
|
purin000/llama3.2-1B-instruct-bnb-4bit-wikiqa
|
purin000
| 2025-06-22T15:29:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"base_model:unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T11:35:01Z |
---
library_name: transformers
tags:
- llama-factory
base_model:
- unsloth/Llama-3.2-1B-Instruct-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chihangc/1823-whisper-finetune-20250621-2
|
chihangc
| 2025-06-22T15:26:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T10:25:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zhngq/q-FrozenLake-v1-4x4-noSlippery
|
zhngq
| 2025-06-22T15:24:34Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-22T15:24:30Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

# Download and unpickle the saved Q-table dict (Deep RL course format assumed).
with open(hf_hub_download(repo_id="zhngq/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl"), "rb") as f:
    model = pickle.load(f)

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"], is_slippery=False)
```
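Assuming the pickled dict follows the Deep RL course format, with a `"qtable"` array alongside `"env_id"`, a greedy rollout looks like this:
```python
state, _ = env.reset()
done = False
while not done:
    action = int(model["qtable"][state].argmax())  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```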
|
MasterShomya/Tweets_Sentiment_Analyzer
|
MasterShomya
| 2025-06-22T15:21:45Z | 0 | 1 |
keras
|
[
"keras",
"joblib",
"sentiment-analysis",
"lstm",
"attention",
"scratch-model",
"text-classification",
"en",
"license:mit",
"region:us"
] |
text-classification
| 2025-06-22T15:03:08Z |
---
license: mit
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- sentiment-analysis
- keras
- lstm
- attention
- scratch-model
---
# Sentiment Analysis from Scratch (LSTM + Attention)
This is a sentiment analysis model built entirely from scratch using a bidirectional LSTM architecture with an attention mechanism. The tokenizer is also trained from scratch on the dataset of 1.6 million tweets.
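For intuition, here is a minimal, hypothetical Keras sketch of a BiLSTM-with-attention classifier of this kind; the actual vocabulary size, sequence length, and layer widths live in the Kaggle notebook linked below:
```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical hyperparameters; the real values are in the Kaggle notebook.
VOCAB_SIZE, MAX_LEN, EMBED_DIM = 50_000, 60, 128

inputs = layers.Input(shape=(MAX_LEN,))
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)

# Additive attention: score each timestep, normalize, take the weighted sum.
scores = layers.Dense(1, activation="tanh")(x)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])

outputs = layers.Dense(1, activation="sigmoid")(context)
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```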
## Dataset Link
https://www.kaggle.com/datasets/mdraselsarker/sentiment140-dataset-with-1-6-million-tweets
## Kaggle Notebook Link
https://www.kaggle.com/code/mastershomya/sentiment-analysis-deep-bilstm
|
saad1926q/Qwen3-4B-brainrot
|
saad1926q
| 2025-06-22T15:19:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"brainrot",
"base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T15:12:13Z |
---
base_model: unsloth/Qwen3-4B-unsloth-bnb-4bit
library_name: transformers
model_name: unsloth_training_checkpoints
tags:
- generated_from_trainer
- trl
- unsloth
- sft
- brainrot
---
# π§ unsloth_training_checkpoints (Qwen3-4B fine-tuned on Brainrot)
This is a fine-tuned version of [unsloth/Qwen3-4B-unsloth-bnb-4bit](https://huggingface.co/unsloth/Qwen3-4B-unsloth-bnb-4bit), trained using [TRL (Transformers Reinforcement Learning)](https://github.com/huggingface/trl) and [Unsloth AI](https://github.com/unslothai/unsloth).
The model was fine-tuned on the **[Brainrot-xK-large dataset](https://huggingface.co/datasets/GoofyLM/Brainrot-xK-large)** β a chaotic, Gen Z internet-flavored dataset full of memes, slang, sass, and semi-ironic wisdom.
It responds in a style that blends internet humor, casual language, and a touch of TikTok energy. Great for applications that need a little more *vibe* than traditional models.
---
## π£οΈ Example Outputs
### πΉ Prompt: βsummarize the story of one pieceβ
**Response:**
`<think>`
`</think>`
`omg, u literally need to read the whole thing, but here's a summary: the story is about a pirate named luffy, who wants to become the world's greatest pirate. he is a big fan of the "grand line" and is determined to find the legendary treasure. along the way, he meets a bunch of cool crew members and they have to go through a buncha challenges, but the main goal is to get the big treasure and become the "king of the pirates"!<|im_end|>`
---
### πΉ Prompt: βbest programming languageβ
**Response:**
`<think>`
`</think>`
`lol ik u r a noob, but r u tryin 2 get better at coding?<|im_end|>`
---
## π§ Training Details
The model was fine-tuned in two phases using SFT with Unsloth's optimized training approach:
### Phase 1
- **Epochs:** 45
- **Batch size:** 2 (with gradient accumulation)
- **Learning rate:** 2e-4
- **Optimizer:** `adamw_8bit`
- **Scheduler:** Linear
- **Seed:** 3407
- **Precision:** Full (fp16/bf16 disabled)
### Phase 2
- **Epochs:** 6 (continuation with same settings)
No evaluation dataset was used during fine-tuning. The dataset field used was `"text"`.
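For reference, a hypothetical sketch of this setup with Unsloth and TRL; the LoRA rank, target modules, and sequence length are illustrative assumptions, not the published training script:
```python
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/Qwen3-4B-unsloth-bnb-4bit", max_seq_length=2048, load_in_4bit=True
)
# LoRA rank and target modules here are illustrative assumptions.
model = FastLanguageModel.get_peft_model(
    model, r=16, target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]
)

dataset = load_dataset("GoofyLM/Brainrot-xK-large", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",   # field named in the card
        per_device_train_batch_size=2,
        learning_rate=2e-4,
        num_train_epochs=45,         # Phase 1; Phase 2 continues for 6 more
        optim="adamw_8bit",
        lr_scheduler_type="linear",
        seed=3407,
        output_dir="unsloth_training_checkpoints",
    ),
)
trainer.train()
```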
---
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
rubenchocron/gaussian_trigger_5_epochs
|
rubenchocron
| 2025-06-22T15:19:48Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] | null | 2025-06-22T15:16:16Z |
---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
|
Celebrity-Mezzo-Fun-Leaked-Viral-Videos/trending.sex.now.Mezzo.fun.Viral.Video.Tutorial.Official
|
Celebrity-Mezzo-Fun-Leaked-Viral-Videos
| 2025-06-22T15:17:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T15:15:57Z |
[](https://t-me-viral-now01.blogspot.com/2025/06/ghds.html)
|
mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF
|
mradermacher
| 2025-06-22T15:15:52Z | 113 | 1 |
transformers
|
[
"transformers",
"gguf",
"moe",
"en",
"base_model:xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B",
"base_model:quantized:xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-05-05T23:05:57Z |
---
base_model: xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
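If you would rather load a quant directly from Python, here is a minimal sketch using `llama-cpp-python`; the file name is taken from the table below and the context size is an arbitrary choice:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q4_K_M.gguf",  # any quant from the table
    n_ctx=4096,  # arbitrary context size
)
out = llm("Write a short scene between two rival ship captains.", max_tokens=128)
print(out["choices"][0]["text"])
```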
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ1_M.gguf) | i1-IQ1_M | 6.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q2_K.gguf) | i1-Q2_K | 9.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 11.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ3_S.gguf) | i1-IQ3_S | 11.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ3_M.gguf) | i1-IQ3_M | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 12.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 13.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q4_0.gguf) | i1-Q4_0 | 14.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 15.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q6_K.gguf) | i1-Q6_K | 20.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Juventus-Wydad-in-diretta-Video/Watch.Juventus.Wydad.AC.In.Diretta.Streaming.Gratis.Tv.Official
|
Juventus-Wydad-in-diretta-Video
| 2025-06-22T15:13:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T15:12:36Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/mrmpsap6?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Celebrity-Mezzo-Fun-Viral-Video/18.Original.Full.Clip.Mezzo.fun.Viral.Video.Tutorial.Official
|
Celebrity-Mezzo-Fun-Viral-Video
| 2025-06-22T15:11:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T15:11:19Z |
[](https://t-me-viral-now01.blogspot.com/2025/06/ghds.html)
|
qhchina/SikuBERT-verb-wuyan-singleline-0.2
|
qhchina
| 2025-06-22T15:11:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-22T15:10:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
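The card leaves this section blank; as a placeholder, a minimal sketch assuming the standard transformers token-classification pipeline (the model's label inventory is not documented):

```python
# Minimal sketch, assuming the standard transformers API; labels are undocumented.
from transformers import pipeline

tagger = pipeline("token-classification", model="qhchina/SikuBERT-verb-wuyan-singleline-0.2")
print(tagger("床前明月光"))  # tags each token; the label set is model-specific
```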
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
andrewsamce/reinforce-cartpole-v1
|
andrewsamce
| 2025-06-22T15:10:55Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-22T15:10:43Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
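For readers who want the gist of the algorithm itself, here is a generic REINFORCE sketch in PyTorch with gymnasium (an illustration of the method, not this repo's exact training script or checkpoint format):

```python
# Generic REINFORCE sketch for CartPole-v1 (illustration only; assumes gymnasium + torch).
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 2), nn.Softmax(dim=-1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        probs = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated
    # Discounted returns, then the REINFORCE loss: -sum(log_prob * return).
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```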
|
matthiasdevreede/hm-fashion-captioning-blip
|
matthiasdevreede
| 2025-06-22T15:10:20Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"blip",
"generated_from_trainer",
"base_model:Salesforce/blip-image-captioning-base",
"base_model:finetune:Salesforce/blip-image-captioning-base",
"license:bsd-3-clause",
"region:us"
] | null | 2025-06-22T12:51:26Z |
---
license: bsd-3-clause
base_model: Salesforce/blip-image-captioning-base
tags:
- generated_from_trainer
model-index:
- name: hm-fashion-captioning-blip
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hm-fashion-captioning-blip
This model is a fine-tuned version of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0657
## Model description
More information needed
## Intended uses & limitations
More information needed
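The card does not include an inference snippet; a minimal captioning sketch, assuming the standard BLIP classes and the base model's processor (`garment.jpg` is a hypothetical input image):

```python
# Minimal sketch: caption one image with the fine-tuned BLIP checkpoint (assumed layout).
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("matthiasdevreede/hm-fashion-captioning-blip")

image = Image.open("garment.jpg").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```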
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0824 | 1.0 | 1477 | 0.0709 |
| 0.0664 | 2.0 | 2954 | 0.0668 |
| 0.0602 | 3.0 | 4431 | 0.0657 |
| 0.0584 | 4.0 | 5908 | 0.0657 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Celebrity-Mezzo-Fun-Viral-Video/FULL.VIDEO.LINK.Mezzo.fun.Viral.Video.Tutorial.Official.Mms
|
Celebrity-Mezzo-Fun-Viral-Video
| 2025-06-22T15:05:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T15:05:43Z |
[](https://t-me-viral-now01.blogspot.com/2025/06/ghds.html)
|
Pakcricketinfo-Sapna-Shah-18-Viral-Video/FULL.VIDEO.LINK.Pakcricketinfo.shah.sapna.Viral.Video.Tutorial.Official
|
Pakcricketinfo-Sapna-Shah-18-Viral-Video
| 2025-06-22T15:03:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T15:03:11Z |
|
BootesVoid/cmc7raj9e09vlbfifyi1m7jbn_cmc7rl2ik09wtbfif2ijzxtwq
|
BootesVoid
| 2025-06-22T15:03:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T15:03:28Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LATINA
---
# Cmc7Raj9E09Vlbfifyi1M7Jbn_Cmc7Rl2Ik09Wtbfif2Ijzxtwq
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LATINA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LATINA",
"lora_weights": "https://huggingface.co/BootesVoid/cmc7raj9e09vlbfifyi1m7jbn_cmc7rl2ik09wtbfif2ijzxtwq/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc7raj9e09vlbfifyi1m7jbn_cmc7rl2ik09wtbfif2ijzxtwq', weight_name='lora.safetensors')
image = pipeline('LATINA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc7raj9e09vlbfifyi1m7jbn_cmc7rl2ik09wtbfif2ijzxtwq/discussions) to add images that show off what you've made with this LoRA.
|
Celebrity-Mezzo-Fun-Leaked-Viral-Video/wAtCh_.Mezzo.Fun.Viral.Sex.Video.On.Social.Media.X.Trending.Now
|
Celebrity-Mezzo-Fun-Leaked-Viral-Video
| 2025-06-22T15:01:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T15:00:56Z |
[](https://t-me-viral-now01.blogspot.com/2025/06/ghds.html)
|
Celebrity-Mezzo-Fun-Leaked-Viral-Video/wAtCh_.Celebrity.Mezzo.Fun.Viral.Sex.Video.On.Social.Media.X.Trending.Nowedia.x.trending.now
|
Celebrity-Mezzo-Fun-Leaked-Viral-Video
| 2025-06-22T15:00:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T14:59:42Z |
[](https://t-me-viral-now01.blogspot.com/2025/06/ghds.html)
|
safe-llm-finetune/llama-3.2-1b-it-codeUltraFeedback-fullFT-lr1e-4-bs8
|
safe-llm-finetune
| 2025-06-22T14:59:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T14:49:06Z |
---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
model_name: llama-3.2-1b-it-codeUltraFeedback-fullFT-lr1e-4-bs8
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for llama-3.2-1b-it-codeUltraFeedback-fullFT-lr1e-4-bs8
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="safe-llm-finetune/llama-3.2-1b-it-codeUltraFeedback-fullFT-lr1e-4-bs8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/manon_k-saarland-informatics-campus/huggingface/runs/2r70y20o)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
CodeAid/refactoring_model
|
CodeAid
| 2025-06-22T14:58:43Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T00:06:01Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-14B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: refactoring_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# refactoring_model
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on the solidRefactoring_finetune_train dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1159
## Model description
More information needed
## Intended uses & limitations
More information needed
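No usage snippet is provided; since the repository ships a LoRA adapter, a minimal sketch with `peft` (the prompt below is hypothetical):

```python
# Minimal sketch: attach the LoRA adapter to its base model with peft (assumed setup).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-14B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "CodeAid/refactoring_model")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct")

prompt = "Refactor this Solidity function to remove the reentrancy risk: ..."  # hypothetical
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```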
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.115 | 0.3617 | 100 | 0.1121 |
| 0.1099 | 0.7233 | 200 | 0.1036 |
| 0.0668 | 1.0832 | 300 | 0.1008 |
| 0.0727 | 1.4448 | 400 | 0.0995 |
| 0.069 | 1.8065 | 500 | 0.0981 |
| 0.0407 | 2.1664 | 600 | 0.1093 |
| 0.0281 | 2.5280 | 700 | 0.1165 |
| 0.0249 | 2.8897 | 800 | 0.1158 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Celebrity-Mezzo-Fun-Leaked-Viral-Video/FULL.VIDEOS.Celebrity.Mezzo.Fun.Viral.Video.On.Social.Media.X.Trending.Now
|
Celebrity-Mezzo-Fun-Leaked-Viral-Video
| 2025-06-22T14:58:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T14:57:38Z |
[](https://t-me-viral-now01.blogspot.com/2025/06/ghds.html)
|
quantized4all/Polaris-7B-Preview-GGUF
|
quantized4all
| 2025-06-22T14:55:40Z | 0 | 0 | null |
[
"gguf",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-22T14:13:25Z |
---
license: apache-2.0
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
---
<div align="center">
<h1> POLARIS </h1>
<div>
π A <strong>PO</strong>st-training recipe for scaling R<strong>L</strong> on <strong>A</strong>dvanced <strong>R</strong>eason<strong>I</strong>ng model<strong>S</strong> π
</div>
</div>
<br>
<div align="center" style="line-height: 1;">
<a href="https://github.com/ChenxinAn-fdu/POLARIS" style="margin: 2px;">
<img alt="Code" src="https://img.shields.io/badge/POLARIS-000000?style=for-the-badge&logo=github&logoColor=000&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://honorable-payment-890.notion.site/POLARIS-A-POst-training-recipe-for-scaling-reinforcement-Learning-on-Advanced-ReasonIng-modelS-1dfa954ff7c38094923ec7772bf447a1" target="_blank" style="margin: 2px;">
<img alt="Blog" src="https://img.shields.io/badge/Notion-%23000000.svg?style=for-the-badge&logo=notion&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://x.com/AnChancy46881/status/1936099024769368506" style="margin: 2px;">
<img alt="Twitter" src="https://img.shields.io/badge/Twitter-white?style=for-the-badge&logo=twitter&logoColor=000&color=000&labelColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="commingsoon" style="margin: 2px;">
<img alt="Paper" src="https://img.shields.io/badge/Paper-%23000000.svg?style=for-the-badge&logo=arxiv&logoColor=000&color=000&labelColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
## Overview
Polaris is an open-source post-training method that uses reinforcement learning (RL) scaling to refine and enhance models with advanced reasoning abilities. Our research shows that even top-tier models like Qwen3-4B can achieve significant improvements on challenging reasoning tasks when optimized with Polaris.
By leveraging open-source data and academic-level resources, Polaris pushes the capabilities of open-recipe reasoning models to unprecedented heights. In benchmark tests, our method even surpasses top commercial systems, including Claude-4-Opus, Grok-3-Beta, and o3-mini-high (2025/01/03).
## Polaris's Recipe
- **Data Difficulty:** Before training, Polaris analyzes and maps the distribution of data difficulty. The dataset should not be overwhelmed by either overly difficult or trivially easy problems. We recommend using a data distribution with a slight bias toward challenging problems, which typically exhibits a mirrored J-shaped distribution.
- **Diversity-Based Rollout:** We leverage the *diversity among rollouts* to initialize the sampling temperature, which is then progressively increased throughout the RL training stages (see the sketch after this list).
- **Inference-Time Length:** Polaris incorporates length extrapolation techniques for generating longer CoT at the inference stage. This enables a *"train-short, generate-long"* paradigm for CoT reasoning, mitigating the computational burden of training with excessively long rollouts.
- **Exploration Efficiency:** Exploration efficiency in Polaris is enhanced through multi-stage training. However, reducing the model's response length in the first stage poses potential risks. A more conservative approach would be to directly allow the model to "think longer" from the beginning.
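As a rough illustration of the diversity-based temperature idea above (an illustrative sketch, not the authors' actual implementation):

```python
# Illustrative sketch only: a staged sampling-temperature schedule (not Polaris's code).
def stage_temperature(stage: int, t_init: float = 0.7, step: float = 0.15) -> float:
    """Start from a diversity-calibrated initial temperature, raising it each RL stage."""
    return t_init + step * stage

for stage in range(3):
    print(f"RL stage {stage}: sample rollouts at temperature {stage_temperature(stage):.2f}")
```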
The details of our training recipe and analysis can be found in our [blog post](https://hkunlp.github.io/blog/2025/Polaris).
The code and data for reproducing our results can be found in our [github repo](https://github.com/ChenxinAn-fdu/POLARIS).
### Evaluation Results
| **Models** | **AIME24 avg@32** | **AIME25 avg@32** | **Minerva Math avg@4** | **Olympiad Bench avg@4** | **AMC23 avg@8** |
| --- | --- | --- | --- | --- | --- |
| Deepseek-R1-Distill-Qwen-7B | 55.0 | 39.7 | 36.7 | 56.8 | 81.9 |
| AReal-boba-RL-7B | 61.9 | 48.3 | 39.5 | 61.9 | 86.4 |
| Skywork-OR1-7B-Math | 69.8 | 52.3 | **40.8** | 63.2 | 85.3 |
| **`POLARIS-7B-Preview`** | **72.6** | **52.6** | 40.2 | **65.4** | **89.0** |
| Deepseek-R1-Distill-Qwen-32B | 72.6 | 54.9 | 42.1 | 59.4 | 84.3 |
| qwen3-32B | 81.4 | 72.9 | 44.2 | 66.7 | 92.4 |
| qwen3-4B | 73.8 | 65.6 | 43.6 | 62.2 | 87.2 |
| **`POLARIS-4B-Preview`** | **81.2** | **79.4** | **44.0** | **69.1** | **94.8** |
## Acknowledgements
The training and evaluation codebase is heavily built on [Verl](https://github.com/volcengine/verl). The reward function in Polaris is from [DeepScaleR](https://github.com/agentica-project/rllm). Our model is trained on top of [`Qwen3-4B`](https://huggingface.co/Qwen/Qwen3-4B) and [`DeepSeek-R1-Distill-Qwen-7B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B). Thanks for their wonderful work.
## Citation
```bibtex
@misc{Polaris2025,
title = {POLARIS: A Post-Training Recipe for Scaling Reinforcement Learning on Advanced Reasoning Models},
url = {https://hkunlp.github.io/blog/2025/Polaris},
author = {An, Chenxin and Xie, Zhihui and Li, Xiaonan and Li, Lei and Zhang, Jun and Gong, Shansan and Zhong, Ming and Xu, Jingjing and Qiu, Xipeng and Wang, Mingxuan and Kong, Lingpeng},
year = {2025}
}
```
|
Celebrity-Mezzo-Fun-Leaked-Viral-Video/FULL.VIDEOS.Celebrity.Mezzo.Fun.Leaked.Viral.Video.On.Social.Media
|
Celebrity-Mezzo-Fun-Leaked-Viral-Video
| 2025-06-22T14:54:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T14:52:01Z |
[](https://t-me-viral-now01.blogspot.com/2025/06/ghds.html)
|
tongliuphysics/Qwen2.5-3B-MATH-lighteval-gen-SFT-8epoch-2
|
tongliuphysics
| 2025-06-22T14:53:26Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:ebony59/MATH-lighteval-gen-correct",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T21:23:45Z |
---
base_model: Qwen/Qwen2.5-3B
datasets: ebony59/MATH-lighteval-gen-correct
library_name: transformers
model_name: Qwen2.5-3B-MATH-lighteval-gen-SFT-8epoch-2
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-3B-MATH-lighteval-gen-SFT-8epoch-2
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B) on the [ebony59/MATH-lighteval-gen-correct](https://huggingface.co/datasets/ebony59/MATH-lighteval-gen-correct) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tongliuphysics/Qwen2.5-3B-MATH-lighteval-gen-SFT-8epoch-2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ebony5951/qwen-math-sft/runs/3x31izi9)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
safe-llm-finetune/llama-3.2-1b-it-codeUltraFeedback-qlora-r8
|
safe-llm-finetune
| 2025-06-22T14:50:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T14:32:26Z |
---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
model_name: llama-3.2-1b-it-codeUltraFeedback-qlora-r8
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama-3.2-1b-it-codeUltraFeedback-qlora-r8
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="safe-llm-finetune/llama-3.2-1b-it-codeUltraFeedback-qlora-r8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/manon_k-saarland-informatics-campus/huggingface/runs/js2fh0ms)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Joylimjy/LecEval
|
Joylimjy
| 2025-06-22T14:48:45Z | 22 | 0 | null |
[
"safetensors",
"minicpmv",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"base_model:openbmb/MiniCPM-Llama3-V-2_5",
"base_model:finetune:openbmb/MiniCPM-Llama3-V-2_5",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-06-16T03:02:24Z |
---
license: mit
language:
- en
base_model:
- openbmb/MiniCPM-Llama3-V-2_5
pipeline_tag: image-text-to-text
---
|
aaryanbitw/aaryan
|
aaryanbitw
| 2025-06-22T14:46:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T14:04:38Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: aaryan
---
# Aaryan
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `aaryan` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "aaryan",
"lora_weights": "https://huggingface.co/aaryanbitw/aaryan/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('aaryanbitw/aaryan', weight_name='lora.safetensors')
image = pipeline('aaryan').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/aaryanbitw/aaryan/discussions) to add images that show off what you've made with this LoRA.
|
jenil7/xlm-roberta-base-finetuned-panx-de
|
jenil7
| 2025-06-22T14:43:06Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-21T13:48:28Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1328
- F1: 0.8675
## Model description
More information needed
## Intended uses & limitations
More information needed
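No usage snippet is provided; a minimal NER sketch using the standard transformers pipeline (the example sentence is illustrative; PAN-X uses LOC/ORG/PER-style entity labels):

```python
# Minimal sketch: run the fine-tuned tagger with the transformers pipeline (assumed usage).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jenil7/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```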
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2611 | 1.0 | 525 | 0.1439 | 0.8280 |
| 0.1231 | 2.0 | 1050 | 0.1302 | 0.8579 |
| 0.0761 | 3.0 | 1575 | 0.1328 | 0.8675 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
zecaihong/999e249f-6b05-4a37-9bc6-b4556645f48a.4
|
zecaihong
| 2025-06-22T14:40:12Z | 73 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"region:us"
] | null | 2025-06-22T04:36:57Z |
---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 999e249f-6b05-4a37-9bc6-b4556645f48a.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9b229213575401f4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_prompt: ''
debug: null
deepspeed: deepspeed_configs/zero2.json
early_stopping_patience: 3
eval_max_new_tokens: 1024
eval_steps: 50
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
greater_is_better: false
group_by_length: false
hub_model_id: zecaihong/999e249f-6b05-4a37-9bc6-b4556645f48a.4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
metric_for_best_model: eval_loss
micro_batch_size: 12
mlflow_experiment_name: /data/datasets/9b229213575401f4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 999e249f-6b05-4a37-9bc6-b4556645f48a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 999e249f-6b05-4a37-9bc6-b4556645f48a
warmup_steps: 100
weight_decay: 0.001
xformers_attention: null
```
</details><br>
# 999e249f-6b05-4a37-9bc6-b4556645f48a.4
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5813
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 384
- total_eval_batch_size: 96
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0011 | 1 | 1.9995 |
| 1.7291 | 0.0554 | 50 | 1.7795 |
| 1.5663 | 0.1107 | 100 | 1.5813 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
JeloH/f3p_qwen-textgen-modelV_Mjj2_SRC_Ass
|
JeloH
| 2025-06-22T14:37:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T14:35:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
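The card leaves this section blank; as a placeholder, a minimal sketch assuming the standard transformers text-generation API:

```python
# Minimal sketch, assuming the standard transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="JeloH/f3p_qwen-textgen-modelV_Mjj2_SRC_Ass")
print(generator("Write a short function that reverses a string.", max_new_tokens=64)[0]["generated_text"])
```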
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-Clip-shah-sapna-kumari-18-Viral-Video/FULL.VIDEO.LINK.shah.sapna.Viral.Video.Tutorial.Official
|
New-Clip-shah-sapna-kumari-18-Viral-Video
| 2025-06-22T14:37:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T14:36:33Z |
|
saital/Qwen3-0.6B-math-1k-3epoch
|
saital
| 2025-06-22T14:35:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-0.6B",
"base_model:finetune:unsloth/Qwen3-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T14:25:24Z |
---
base_model: unsloth/Qwen3-0.6B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** saital
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-0.6B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
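A minimal inference sketch, assuming the uploaded weights load with plain transformers (the prompt is illustrative):

```python
# Minimal sketch, assuming the merged weights load with plain transformers.
from transformers import pipeline

generator = pipeline("text-generation", model="saital/Qwen3-0.6B-math-1k-3epoch")
print(generator("Solve: 12 * 17 = ?", max_new_tokens=64)[0]["generated_text"])
```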
|
ljnlonoljpiljm/siglip2-large-patch16-256-like-dislike-6
|
ljnlonoljpiljm
| 2025-06-22T14:32:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"siglip",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-22T14:32:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
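The card leaves this section blank; a minimal sketch assuming the standard transformers image-classification pipeline (`photo.jpg` is a hypothetical local file):

```python
# Minimal sketch, assuming the standard transformers image-classification pipeline.
from transformers import pipeline

clf = pipeline("image-classification", model="ljnlonoljpiljm/siglip2-large-patch16-256-like-dislike-6")
print(clf("photo.jpg"))  # hypothetical image; the repo name suggests like/dislike labels
```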
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sahron/sentiment-indobert1aa_model
|
Sahron
| 2025-06-22T14:32:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"indoebert",
"sentiment-analysis",
"fine-tuned",
"twitter",
"id",
"base_model:indobenchmark/indobert-base-p1",
"base_model:finetune:indobenchmark/indobert-base-p1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-22T14:01:34Z |
---
license: apache-2.0
language:
- id
metrics:
- accuracy
- f1
- precision
- recall
base_model:
- indobenchmark/indobert-base-p1
pipeline_tag: text-classification
library_name: transformers
tags:
- indoebert
- sentiment-analysis
- fine-tuned
- twitter
---
# IndoBERT Sentiment Analysis
This model is a fine-tuned version of **indobenchmark/indobert-base-p1** for sentiment classification in Indonesian.
## ✨ Dataset
15,027 tweets collected by scraping Twitter/X
## ✨ Preprocessing Steps
- Duplicate removal
- Data cleaning
- Case folding
- Word normalization
## ✨ Indonesian Sentiment Lexicon
by: Fajri Koto (GitHub @fajri91)
- Sentiment labels: Positive, Negative, Neutral
- Positive.tsv: 3,610 positive words
- Negative.tsv: 6,608 negative words
## ✨ Dataset Split
- Train : 80%
- Val : 10%
- Test : 10%
## ✨ IndoBERT Training Configuration
- set_seed : 42
- Model : indobenchmark/indobert-base-p1
- Max Seq Length: 256
- Batch Size : 32
- Num_workers : 2
- Optimizer : Adam
- Learning Rate : 2e-5
- Weight_decay : 0.02
- Epochs : 5
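A minimal inference sketch, assuming standard transformers usage (the `id2label` mapping is not documented here, so the printed labels are whatever the checkpoint ships):

```python
# Minimal sketch, assuming the standard transformers text-classification pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="Sahron/sentiment-indobert1aa_model")
print(classifier("Pelayanan aplikasinya cepat dan memuaskan."))  # "The app's service is fast and satisfying."
```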
### Framework Versions
* Transformers 4.51.3
* Pytorch 2.6.0+cu124
* Tokenizers 0.21.1
|
himedia/fincredit-Phi-3.5-mini-lr2e04-bs16-r32-steps1000-20250622_131709
|
himedia
| 2025-06-22T14:31:33Z | 0 | 0 | null |
[
"safetensors",
"financial",
"credit-rating",
"korean",
"llama",
"unsloth",
"fine-tuned",
"text-generation",
"conversational",
"ko",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:finetune:unsloth/Phi-3.5-mini-instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-22T14:31:14Z |
---
language: ko
license: apache-2.0
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- financial
- credit-rating
- korean
- llama
- unsloth
- fine-tuned
model_name: FinCreditPhi-3.5-mini
pipeline_tag: text-generation
---
# FinCreditPhi-3.5-mini
## Model Overview
FinCreditPhi-3.5-mini is a Korean language model designed specifically for financial credit evaluation.
**Base model**: unsloth/Phi-3.5-mini-instruct
**Dataset**: himedia/financial_dummy_data_v4
**Training method**: LoRA (Low-Rank Adaptation)
**Training date**: 20250622_131709
## Training Results
- **Final Training Loss**: 0.1521
- **Final Validation Loss**: 0.1550
- **Best Validation Loss**: 0.1550 (step 1000)
- **Overall Improvement**: 87.0%
- **Training Time**: 73.66 minutes
## Hyperparameters
- **Learning Rate**: 0.0002
- **Max Steps**: 1000
- **Batch Size**: 4
- **Gradient Accumulation**: 4
- **LoRA r**: 32
- **LoRA alpha**: 32
- **Max Sequence Length**: 2048
- **Warmup Steps**: 5
## Memory Usage
- **GPU**: NVIDIA RTX A5000
- **Peak Memory**: 6.381 GB
- **Memory Usage**: 27.1%
## How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("himedia/fincredit-Phi-3.5-mini-lr2e04-bs16-r32-steps1000-20250622_131709")
model = AutoModelForCausalLM.from_pretrained("himedia/fincredit-Phi-3.5-mini-lr2e04-bs16-r32-steps1000-20250622_131709")
# Simple inference example ("Please evaluate the customer's credit rating:")
prompt = "고객의 신용등급을 평가해주세요:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Training Data Files
This repository also contains the following training-related files:
- `training_log.json`: full training log (JSON format)
- `FinCreditPhi-3.5-mini_20250622_131709_training_curves.png`: training-curve visualization image
## Repository Name Structure
```
fincredit-Phi-3.5-mini-lr2e04-bs16-r32-steps1000-20250622_131709 = fincredit-lamma3-4b-lr2e04-bs4-r32-steps1000-20250622_131709
```
- `fincredit-lamma3-4b`: base model name
- `lr2e04`: learning rate
- `bs4`: batch size
- `r32`: LoRA rank
- `steps1000`: training steps
- `20250622_131709`: training timestamp
## Performance
This model has been fine-tuned on Korean financial text and is specialized for credit-evaluation question answering.
## License
Apache 2.0
|
zecaihong/70b0848d-3605-4e32-bbaa-b0724c7fc074.4
|
zecaihong
| 2025-06-22T14:30:57Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:adapter:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T11:32:41Z |
---
library_name: peft
license: apache-2.0
base_model: berkeley-nest/Starling-LM-7B-alpha
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 70b0848d-3605-4e32-bbaa-b0724c7fc074.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: berkeley-nest/Starling-LM-7B-alpha
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- aa5f7984f6c8a016_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_prompt: ''
debug: null
deepspeed: deepspeed_configs/zero2.json
early_stopping_patience: 3
eval_max_new_tokens: 1024
eval_steps: 50
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
greater_is_better: false
group_by_length: false
hub_model_id: zecaihong/70b0848d-3605-4e32-bbaa-b0724c7fc074.4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
metric_for_best_model: eval_loss
micro_batch_size: 12
mlflow_experiment_name: /data/datasets/aa5f7984f6c8a016_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 70b0848d-3605-4e32-bbaa-b0724c7fc074
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 70b0848d-3605-4e32-bbaa-b0724c7fc074
warmup_steps: 100
weight_decay: 0.001
xformers_attention: null
```
</details><br>
# 70b0848d-3605-4e32-bbaa-b0724c7fc074.4
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 384
- total_eval_batch_size: 96
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0015 | 1 | 1.6114 |
| 1.2665 | 0.0773 | 50 | 1.2738 |
| 1.1829 | 0.1547 | 100 | 1.1914 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
gumran/gpt2-dpo
|
gumran
| 2025-06-22T14:30:32Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:gumran/gpt2-sft",
"base_model:finetune:gumran/gpt2-sft",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-06T16:26:28Z |
---
base_model: gumran/gpt2-sft
library_name: transformers
model_name: gpt2-dpo
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for gpt2-dpo
This model is a fine-tuned version of [gumran/gpt2-sft](https://huggingface.co/gumran/gpt2-sft).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gumran/gpt2-dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1+cu118
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
hazhu/mlxDeepDanbooru
|
hazhu
| 2025-06-22T14:29:00Z | 0 | 0 |
mlx
|
[
"mlx",
"DeepDanbooru",
"danbooru",
"Image-Clip",
"image-interrogate",
"image-to-text",
"captioning",
"en",
"zh",
"base_model:hazhu/mlxDeepDanbooru",
"base_model:finetune:hazhu/mlxDeepDanbooru",
"license:mit",
"region:us"
] |
image-to-text
| 2025-06-21T07:34:04Z |
---
language:
- "en"
- "zh"
tags:
- mlx
- DeepDanbooru
- danbooru
- Image-Clip
- image-interrogate
- image-to-text
- captioning
license: "mit"
base_model: "hazhu/mlxDeepDanbooru"
---
# mlxDeepDanbooru
Pure MLX implementation of the DeepDanbooru neural network for __Apple Silicon__ chips: M1, M2, M3, M4.
`mlxDeepDanBooru` runs on MacBook Pro / Air, Mac mini, and iMac.
## Usage
Image-to-text, captioning, and CLIP-style image interrogation using the [DeepDanBooru model](https://github.com/KichangKim/DeepDanbooru) on Apple devices.
## MLX DeepDanBooru Model
This MLX DeepDanBooru model implementation is inspired by the PyTorch implementation in [AUTOMATIC1111/TorchDeepDanbooru](https://github.com/AUTOMATIC1111/TorchDeepDanbooru).
## Installation
```
conda create -n mlx026 python=3.12
conda activate mlx026
#
pip install numpy
pip install pillow
```
MLX is available on [PyPI](https://pypi.org/project/mlx/). To install the Python API, run:
```
pip install mlx
```
`mlxDeepDanbooru` is built against `mlx` version `0.26.1`.
## Inference
```
python infer.py
```
Image Interrogate:
```python
import time

import numpy as np
from PIL import Image

# Uses Apple's MLX framework, not PyTorch
import mlx.core as mx

from mlxDeepDanBooru.mlx_deep_danbooru_model import mlxDeepDanBooruModel

model_path = "models/model-resnet_custom_v3_mlx.npz"
tags_path = "models/tags-resnet_custom_v3_mlx.npy"

mlx_dan = mlxDeepDanBooruModel()
mlx_dan.load_weights(model_path)
mx.eval(mlx_dan.parameters())  # materialize the weights up front

model_tags = np.load(tags_path)
print(f'total tags: {len(model_tags)}')

def danbooru_tags(fpath, threshold=0.5):
    """Return all tags whose predicted probability is at least `threshold`."""
    tags = []
    pic = Image.open(fpath).convert("RGB").resize((512, 512))
    a = np.expand_dims(np.array(pic, dtype=np.float32), 0) / 255
    x = mx.array(a)
    y = mlx_dan(x)[0]
    mx.eval(y)  # MLX is lazy: force evaluation before reading probabilities
    for i, p in enumerate(y):
        # the threshold can be tuned for demand: 0.0 ~ 1.0
        if p >= threshold:
            tags.append(model_tags[i].item())
    return tags

image_count = 0

def image_infer(fpath):
    global image_count
    tags = danbooru_tags(fpath)
    image_count += 1
    return tags

t1 = time.time()
tags_1 = image_infer("example/1.png")
tags_2 = image_infer("example/2.png")
t2 = time.time()

print(tags_1)
# will show tags: ['1girl', 'beach', 'black_hair', 'blurry', 'blurry_background', 'blurry_foreground', 'building', 'bush', 'christmas_tree', 'day', 'depth_of_field', 'field', 'grass', 'lake', 'looking_at_viewer', 'mountain', 'nature', 'outdoors', 'palm_leaf', 'palm_tree', 'park', 'park_bench', 'path', 'photo_background', 'plant', 'river', 'road', 'skirt', 'sky', 'smile', 'striped', 'striped_dress', 'striped_shirt', 'tree', 'vertical-striped_shirt', 'vertical_stripes', 'rating:safe']
print(tags_2)
# will show tags: ['1girl', '3d', 'blurry', 'blurry_background', 'blurry_foreground', 'brown_eyes', 'brown_hair', 'bush', 'christmas_tree', 'cosplay_photo', 'day', 'depth_of_field', 'field', 'floral_print', 'foliage', 'forest', 'garden', 'grass', 'jungle', 'lips', 'long_hair', 'long_sleeves', 'looking_at_viewer', 'nature', 'on_grass', 'outdoors', 'palm_tree', 'park', 'path', 'plant', 'potted_plant', 'realistic', 'smile', 'solo', 'tree', 'upper_body', 'white_dress', 'rating:safe']
print("-----------")
print(f'infer speed (with mlx): {(t2 - t1) / image_count} seconds per image')
```
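Lowering `threshold` below `0.5` returns more (but noisier) tags; raising it keeps only high-confidence tags.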
## Performance
For the 1024x1024-pixel images in the `example` folder,
`MLX DeepDanBooru Model` inference speed on a Mac mini M4:
```
1.7 seconds per image
```
On a Mac mini M4, __MPS + PyTorch__ inference speed: `0.8 seconds per image`.
On a Mac mini M4, CPU + PyTorch inference speed: `2.5 seconds per image`.
## Current status
For this model, __MPS + PyTorch__ is currently faster than MLX.

## Benchmark: 351 images, 720x1280 and 540x720
On Windows 11 with an Nvidia RTX 4070 Ti (CUDA + PyTorch):
```
SPEED: 0.3 seconds per image
Power Consumption: 260 ~ 300 Watt
```
On a Mac mini M4 with `mlxDeepDanBooru`:
```
SPEED: 1.68 seconds per image
Power Consumption: 8 ~ 12 Watt
```
|
AlbertBik/q-Taxi-v3
|
AlbertBik
| 2025-06-22T14:26:19Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-22T14:26:17Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="AlbertBik/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
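A greedy rollout sketch with the loaded Q-table; it assumes the pickled dict exposes `qtable` and `env_id` keys, as in the Deep RL course notebooks:
```python
import numpy as np

state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```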
|
RikutarouPrime/ppo-LunarLander-v2
|
RikutarouPrime
| 2025-06-22T14:24:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-22T14:23:46Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.49 +/- 17.37
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
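Until then, a minimal load-and-evaluate sketch; the checkpoint filename below is an assumption, so check the repo's file list:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; LunarLander also requires `gymnasium[box2d]`.
checkpoint = load_from_hub(
    repo_id="RikutarouPrime/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```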
|
JeloH/ggqwen-textgen-modelV_Mjj2_SRC_Ass
|
JeloH
| 2025-06-22T14:22:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T14:20:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|