modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-27 12:28:27) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 533 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-27 12:28:17) | card (string, 11 – 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
stanfordnlp/stanza-xcl | stanfordnlp | 2025-06-17T00:55:50Z | 12 | 0 | stanza | ["stanza", "token-classification", "xcl", "license:apache-2.0", "region:us"] | token-classification | 2024-06-23T05:53:39Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: xcl
license: apache-2.0
---
# Stanza model for Classical_Armenian (xcl)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
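A minimal usage sketch, assuming the standard Stanza pipeline API (the processor list and input text here are illustrative, not from this card):
```python
import stanza

# Download the Classical Armenian (xcl) models, then build a pipeline.
stanza.download("xcl")
nlp = stanza.Pipeline("xcl", processors="tokenize,pos,lemma")

doc = nlp("...")  # replace with Classical Armenian text
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.lemma)
```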
Find out more about it on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo.
Last updated 2025-06-17 00:55:43.963
| mradermacher/Fallen-Llama-3.3-70B-v1-GGUF | mradermacher | 2025-06-17T00:42:34Z | 0 | 0 | transformers | ["transformers", "gguf", "en", "base_model:TheDrummer/Fallen-Llama-3.3-70B-v1", "base_model:quantized:TheDrummer/Fallen-Llama-3.3-70B-v1", "endpoints_compatible", "region:us"] | null | 2025-06-16T16:00:43Z |
---
base_model: TheDrummer/Fallen-Llama-3.3-70B-v1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TheDrummer/Fallen-Llama-3.3-70B-v1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Fallen-Llama-3.3-70B-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
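If you only need to join split files, here is a minimal sketch of the usual approach, assuming a POSIX shell (most GGUF loaders expect a single file):
```bash
# Concatenate the parts of a split quant into a single GGUF file before loading.
cat Fallen-Llama-3.3-70B-v1.Q6_K.gguf.part1of2 \
    Fallen-Llama-3.3-70B-v1.Q6_K.gguf.part2of2 \
    > Fallen-Llama-3.3-70B-v1.Q6_K.gguf
```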
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fallen-Llama-3.3-70B-v1-GGUF/resolve/main/Fallen-Llama-3.3-70B-v1.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fallen-Llama-3.3-70B-v1-GGUF/resolve/main/Fallen-Llama-3.3-70B-v1.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Fallen-Llama-3.3-70B-v1-GGUF/resolve/main/Fallen-Llama-3.3-70B-v1.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fallen-Llama-3.3-70B-v1-GGUF/resolve/main/Fallen-Llama-3.3-70B-v1.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fallen-Llama-3.3-70B-v1-GGUF/resolve/main/Fallen-Llama-3.3-70B-v1.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Fallen-Llama-3.3-70B-v1-GGUF/resolve/main/Fallen-Llama-3.3-70B-v1.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fallen-Llama-3.3-70B-v1-GGUF/resolve/main/Fallen-Llama-3.3-70B-v1.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fallen-Llama-3.3-70B-v1-GGUF/resolve/main/Fallen-Llama-3.3-70B-v1.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fallen-Llama-3.3-70B-v1-GGUF/resolve/main/Fallen-Llama-3.3-70B-v1.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Fallen-Llama-3.3-70B-v1-GGUF/resolve/main/Fallen-Llama-3.3-70B-v1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Fallen-Llama-3.3-70B-v1-GGUF/resolve/main/Fallen-Llama-3.3-70B-v1.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Fallen-Llama-3.3-70B-v1-GGUF/resolve/main/Fallen-Llama-3.3-70B-v1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Fallen-Llama-3.3-70B-v1-GGUF/resolve/main/Fallen-Llama-3.3-70B-v1.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| sararob/book-recommender | sararob | 2025-06-17T00:29:18Z | 11 | 0 | sentence-transformers | ["sentence-transformers", "onnx", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:3", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:quantized:sentence-transformers/all-MiniLM-L6-v2", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-06-13T11:58:24Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:3
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sararob/book-recommender")
# Run inference
sentences = [
    'The weather is lovely today.',
    "It's so sunny outside!",
    'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 3 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 3 samples:
| | sentence_0 | sentence_1 |
|:--------|:--------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 6.67 tokens</li><li>max: 8 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 9.0 tokens</li><li>max: 11 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:------------------------------------------|:----------------------------------------------|
| <code>guide to healthy cooking</code> | <code>cookbook for nutritious meals</code> |
| <code>a thrilling space opera</code> | <code>an epic interstellar adventure</code> |
| <code>cozy mystery in a small town</code> | <code>whodunit set in a quaint village</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
```
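As a sketch of how this loss is typically constructed with the Sentence Transformers API (only the `scale` and `similarity_fct` values come from this card; the rest is illustrative):
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# scale=20.0 and cosine similarity match the loss parameters listed above.
loss = losses.MultipleNegativesRankingLoss(model=model, scale=20.0, similarity_fct=util.cos_sim)
```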
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Framework Versions
- Python: 3.13.0
- Sentence Transformers: 3.4.0
- Transformers: 4.48.1
- PyTorch: 2.7.0.dev20250125
- Accelerate: 1.3.0
- Datasets: 3.6.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| MinaMila/llama_instbase_LoRa_GermanCredit_cfda_ep5_22 | MinaMila | 2025-06-17T00:14:14Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-05-21T19:11:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| areebg9-hf/finetuning_llama_sft1 | areebg9-hf | 2025-06-17T00:10:31Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-17T00:10:01Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** areebg9-hf
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| qcw2333/YingLong_110m | qcw2333 | 2025-06-16T23:33:03Z | 348 | 0 | YingLong | ["YingLong", "safetensors", "time-series", "forecasting", "foundation-models", "pretrained-models", "time-series-foundation-models", "large-time-series-models", "time-series-forecasting", "custom_code", "arxiv:2506.11029", "arxiv:2410.04803", "license:cc-by-4.0", "region:us"] | time-series-forecasting | 2025-05-17T13:08:07Z |
---
library_name: YingLong
license: cc-by-4.0
pipeline_tag: time-series-forecasting
tags:
- time-series
- forecasting
- foundation-models
- pretrained-models
- time-series-foundation-models
- large-time-series-models
---
# YingLong
The YingLong model is introduced in this [paper](https://huggingface.co/papers/2506.11029). This version is pre-trained on **78B** time points. More details can be found in our [GitHub repository](https://github.com/wxie9/YingLong/) and on our [project page](https://huggingface.co/qcw1314/YingLong_300m).
## Quickstart
```bash
pip install xformers transformers
pip install flash-attn --no-build-isolation
git clone https://github.com/Dao-AILab/flash-attention && cd flash-attention
cd csrc/rotary && pip install .
cd ../layer_norm && pip install .
```
Flash attention is not required. If you use a V100 or another GPU that does not support flash attention, just change `FlashAttention2Available = RequirementCache("flash-attn>=2.0.0.post1")` to `FlashAttention2Available = False` in the `model.py` file, and the model should run.
```python
import torch
from transformers import AutoModelForCausalLM
# load pretrain model
model = AutoModelForCausalLM.from_pretrained('qcw2333/YingLong_110m', trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
# prepare input
batch_size, lookback_length = 1, 2880
seqs = torch.randn(batch_size, lookback_length).bfloat16().cuda()
# generate forecast
prediction_length = 96
output = model.generate(seqs, future_token=prediction_length)
print(output.shape)
```
A notebook example is also provided [here](https://github.com/wxie9/YingLong/blob/main/quickstart_zero_shot.ipynb). Sample code for the long-term forecasting and gift-eval tasks is provided at [this link](https://github.com/wxie9/YingLong/tree/main).
<!-- ## Specification -->
## Citation
Coming soon...
<!-- ```
@inproceedings{liutimer,
title={Timer: Generative Pre-trained Transformers Are Large Time Series Models},
author={Liu, Yong and Zhang, Haoran and Li, Chenyu and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
booktitle={Forty-first International Conference on Machine Learning}
}
@article{liu2024timer,
title={Timer-XL: Long-Context Transformers for Unified Time Series Forecasting},
author={Liu, Yong and Qin, Guo and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
journal={arXiv preprint arXiv:2410.04803},
year={2024}
}
``` -->
## Contact
If you have any questions or want to use the code, feel free to contact:
Xue Wang ([email protected])
Tian Zhou ([email protected])
## License
This model is licensed under the cc-by-4.0 License.
| MinaMila/llama_instbase_LoRa_GermanCredit_ep4_42 | MinaMila | 2025-06-16T23:24:30Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-05-21T23:42:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| MinaMila/llama_instbase_LoRa_GermanCredit_ep9_33 | MinaMila | 2025-06-16T23:15:33Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-05-21T20:40:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| marcelobarcia/emprendeia | marcelobarcia | 2025-06-16T23:14:20Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-06-15T19:42:56Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Marcelo Barcia
---
# Emprendeia
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Marcelo Barcia` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "Marcelo Barcia",
    "lora_weights": "https://huggingface.co/marcelobarcia/emprendeia/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('marcelobarcia/emprendeia', weight_name='lora.safetensors')
image = pipeline('Marcelo Barcia').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/marcelobarcia/emprendeia/discussions) to add images that show off what you’ve made with this LoRA.
| gsdfg18919/melo | gsdfg18919 | 2025-06-16T23:09:17Z | 0 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2025-06-16T23:09:11Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/270205.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: melody
---
# melo
<Gallery />
## Trigger words
You should use `melody` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/gsdfg18919/melo/tree/main) them in the Files & versions tab.
| BootesVoid/cmbzezmyx05l4rdqsas10fwna_cmbznz9kn06fqrdqs26mekduu | BootesVoid | 2025-06-16T23:01:23Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-06-16T23:01:21Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TIFFANY
---
# Cmbzezmyx05L4Rdqsas10Fwna_Cmbznz9Kn06Fqrdqs26Mekduu
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TIFFANY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "TIFFANY",
    "lora_weights": "https://huggingface.co/BootesVoid/cmbzezmyx05l4rdqsas10fwna_cmbznz9kn06fqrdqs26mekduu/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbzezmyx05l4rdqsas10fwna_cmbznz9kn06fqrdqs26mekduu', weight_name='lora.safetensors')
image = pipeline('TIFFANY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbzezmyx05l4rdqsas10fwna_cmbznz9kn06fqrdqs26mekduu/discussions) to add images that show off what you’ve made with this LoRA.
| sil-ai/madlad400-finetuned-tpi-suo | sil-ai | 2025-06-16T20:56:36Z | 50 | 0 | peft | ["peft", "safetensors", "generated_from_trainer", "base_model:jbochi/madlad400-3b-mt", "base_model:adapter:jbochi/madlad400-3b-mt", "license:apache-2.0", "region:us"] | null | 2025-06-05T09:36:02Z |
---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-tpi-suo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# madlad400-finetuned-tpi-suo
This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3168
- Chrf: 72.8553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Chrf |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.4338 | 8.7312 | 1600 | 0.3361 | 71.8529 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1
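The card does not include an inference example; here is a minimal sketch, assuming the standard PEFT adapter-loading API and MADLAD-400's `<2xx>` target-language prefix (the prefix and input sentence are illustrative assumptions):
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the base model, then attach this repo's adapter weights.
base = AutoModelForSeq2SeqLM.from_pretrained("jbochi/madlad400-3b-mt")
model = PeftModel.from_pretrained(base, "sil-ai/madlad400-finetuned-tpi-suo")
tokenizer = AutoTokenizer.from_pretrained("jbochi/madlad400-3b-mt")

# MADLAD-400 prepends a <2xx> token for the target language; "suo" is assumed here.
inputs = tokenizer("<2suo> Tok Pisin source text goes here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```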
| faizabenatmane/Qwen-3-0.6B | faizabenatmane | 2025-06-16T20:56:07Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen3-0.6B", "base_model:adapter:Qwen/Qwen3-0.6B", "region:us"] | null | 2025-06-16T20:55:18Z |
---
base_model: Qwen/Qwen3-0.6B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
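The template leaves this section blank; below is a minimal sketch, assuming the adapter loads onto its declared base model with the standard PEFT API (the prompt is illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model and adapter repo come from this card's metadata.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")
model = PeftModel.from_pretrained(base, "faizabenatmane/Qwen-3-0.6B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```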
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
| ahmedheakl/r1qw7B-sve-foundational-r1-gemini-ratsuccess | ahmedheakl | 2025-06-16T20:34:12Z | 1 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "generated_from_trainer", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-13T17:31:06Z |
---
library_name: transformers
tags:
- llama-factory
- generated_from_trainer
model-index:
- name: r1qw7B-sve-foundational-r1-gemini-ratsuccess
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# r1qw7B-sve-foundational-r1-gemini-ratsuccess
This model was trained from scratch on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- total_eval_batch_size: 48
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
| Goldya02/photo_collage_in_FOBS_styleA | Goldya02 | 2025-06-16T20:03:48Z | 0 | 0 | diffusers | ["diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us"] | text-to-image | 2025-06-16T18:36:30Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: photo_collage_in_FOBS_styleA
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Goldya02/photo_collage_in_FOBS_styleA
<Gallery />
## Model description
These are Goldya02/photo_collage_in_FOBS_styleA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use photo_collage_in_FOBS_styleA to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Goldya02/photo_collage_in_FOBS_styleA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
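Until the snippet above is filled in, here is a minimal sketch using the standard diffusers LoRA-loading API (prompt and output path are illustrative):
```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("Goldya02/photo_collage_in_FOBS_styleA")

# The trigger phrase comes from the "Trigger words" section of this card.
image = pipeline("a city street, photo_collage_in_FOBS_styleA").images[0]
image.save("example.png")
```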
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
| Lelon/cue-en-conan | Lelon | 2025-06-16T20:02:53Z | 0 | 0 | transformers | ["transformers", "safetensors", "deberta-v2", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2025-06-16T20:02:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
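As a placeholder until the authors add one, a minimal sketch, assuming the standard transformers pipeline API and this repo's `token-classification` pipeline tag (the input sentence is illustrative):
```python
from transformers import pipeline

# Task and model id come from this repo's metadata; the example input is illustrative.
classifier = pipeline("token-classification", model="Lelon/cue-en-conan")
print(classifier("I do not think this will work."))
```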
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
parbin-sultana/wATCH.parbin.sultana.viral.video.original
|
parbin-sultana
| 2025-06-16T19:50:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-16T19:46:35Z |
|
bilalfaye/flan-t5-xl-rpo
|
bilalfaye
| 2025-06-16T19:33:30Z | 0 | 0 | null |
[
"safetensors",
"t5",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"region:us"
] | null | 2025-06-16T16:57:29Z |
---
base_model:
- google/flan-t5-small
- google/flan-t5-large
- google/flan-t5-xl
---
## 🧠 Flan-T5-{Small|Large|XL}-RPO
> 🔬 Fine-tuned with **Reward Partitioning Optimization (RPO)** — a value-free, stable method for single-trajectory reinforcement learning with scalar feedback.
---
### 📌 Model Summary
This model is a fine-tuned variant of the [Flan-T5](https://huggingface.co/google/flan-t5) {Small|Large|XL} checkpoint, trained using **Reward Partitioning Optimization (RPO)**. RPO is a new method designed for learning from single-trajectory scalar feedback (e.g., thumbs up/down), and eliminates the need for learning value functions or preference pairs.
* ✅ Trained with only (prompt, response, reward) triplets.
* 🔁 No joint optimization, no auxiliary models.
* 🚀 Efficient and stable training.
* 🤖 Strong preference alignment (evaluated by LLM-as-a-judge).
* 📊 Outperforms KTO and DRO in automatic metrics and LLM preference winrate.
---
### 🧪 Training Details
* **Base Model:** `flan-t5-{small|large|xl}`
* **Dataset:** [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) — high-quality (prompt, response, reward) triplets with multiple completions per prompt.
* **Feedback Format:** scalar reward (e.g., \[prompt, response, reward]).
* **GPU Used:** 1× A100 (80GB)
* **Training Objective:** RPO supervised learning using partitioned reward normalization (see the sketch below).
* **Baselines Compared:** DRO and KTO.
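As a rough illustration only, a partitioned-reward weighted-likelihood step could look like the sketch below. This is not the reference implementation: the softmax partition and the `beta` temperature are assumptions; see the paper for the exact objective.

```python
# Illustrative sketch of a partitioned-reward objective -- NOT the official RPO code.
# Assumes one batch holds K completions of the same prompt with scalar rewards.
import torch
import torch.nn.functional as F

def rpo_step(per_completion_nll: torch.Tensor,  # (K,) NLL of each completion
             rewards: torch.Tensor,             # (K,) scalar feedback per completion
             beta: float = 1.0) -> torch.Tensor:
    # Partition the scalar rewards across the prompt's completions into
    # normalized weights, then use them to weight the supervised loss.
    weights = F.softmax(rewards / beta, dim=-1).detach()
    return (weights * per_completion_nll).sum()
```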
---
### 🤖 Inference
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name = "bilalfaye/flan-t5-{small|large|xl}-rpo"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)
prompt = "How can I improve my productivity working from home?"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(
input_ids=inputs["input_ids"],
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95,
repetition_penalty=1.2,
no_repeat_ngram_size=3,
)
response = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(response)
```
---
### 📈 Evaluation Summary
| Judge | Win Rate vs DRO | Win Rate vs KTO | Win Rate vs SFT |
| ------- | --------------- | --------------- | --------------- |
| Mistral | ✅ **83–93%** | ✅ **82–93%** | ✅ **82–84%** |
| LLaMA | ✅ **67–74%** | ✅ **65–72%** | ✅ **63–73%** |
---
### ✅ Use Cases
* Aligned conversational agents
* Helpful, non-toxic instruction following
* Scalar feedback training pipelines
* Preference-optimized generation (without pairwise preference labels)
---
### 📚 Citation
If you use this model, please cite the following paper:
```bibtex
@article{faye2025rpo,
title = {Value-Free Policy Optimization via Reward Partitioning},
author = {Bilal Faye and Hanane Azzag and Mustapha Lebbah},
journal = {arXiv preprint arXiv:2406.XXXX},
year = {2025}
}
```
---
### 🔗 Related Models
* `bilalfaye/flan-t5-small-rpo`
* `bilalfaye/flan-t5-large-rpo`
* `bilalfaye/flan-t5-xl-rpo`
|
panchayatseason4/Panchayat.Season.4.download.720p
|
panchayatseason4
| 2025-06-16T19:32:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-16T19:32:05Z |
---
license: apache-2.0
---
|
01-Katrina-Lim-kify-Viral-videos-tv/katrina.lim.viral.kiffy.viral.video.link
|
01-Katrina-Lim-kify-Viral-videos-tv
| 2025-06-16T19:13:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-16T19:11:28Z |
|
KaifFz/phi_cpt_model
|
KaifFz
| 2025-06-16T19:11:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T19:10:43Z |
---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** KaifFz
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
adristunt/ops_abi_mas
|
adristunt
| 2025-06-16T19:10:01Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-16T18:05:36Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
abidne/mt5-support-tickets
|
abidne
| 2025-06-16T19:08:04Z | 8 | 0 | null |
[
"pytorch",
"mt5",
"custom",
"multilingual",
"region:us"
] | null | 2025-06-15T05:50:27Z |
---
language: multilingual
tags:
- mt5
- custom
---
# My custom mT5 model
This model was trained on technical support tickets.
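A minimal loading sketch, assuming the checkpoint exposes the standard mT5 seq2seq interface (the expected ticket input format is not documented):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "abidne/mt5-support-tickets"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input: an untriaged support ticket.
inputs = tokenizer("My printer stopped responding after the latest update.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```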
|
amentaphd/example
|
amentaphd
| 2025-06-16T19:00:52Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-m",
"base_model:finetune:Snowflake/snowflake-arctic-embed-m",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-16T19:00:35Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-m
widget:
- source_sentence: What processes are used to separate the raw liquid mix from natural
gas in a gas recycling plant?
sentences:
- '2016 ►M43 (*1) ◄ 21 September 2017 ►M43 (*2) ◄ [▼M28](./../../../legal-content/EN/AUTO/?uri=celex:32014R0895
"32014R0895: INSERTED") 23. Formaldehyde, oligomeric reaction products with aniline
(technical MDA) EC No: 500-036-1 CAS No: 25214-70-4 Carcinogenic (category 1B)
22 February 2016 ►M43 (*1) ◄ 22 August 2017 ►M43 (*2) ◄ — 24. Arsenic acid EC
No: 231-901-9 CAS No: 7778-39-4 Carcinogenic (category 1A) 22 February 2016 22
August 2017 — 25. Bis(2-methoxyethyl) ether (diglyme) EC No: 203-924-4 CAS No:
111-96-6 Toxic for reproduction (category 1B) 22 February 2016 ►M43 (*1) ◄ 22
August 2017 ►M43 (*2) ◄ — 26. 1,2-dichloroethane (EDC) EC No: 203-458-1 CAS No:
107-06-2 Carcinogenic (category 1B) 22 May 2016 22 November 2017 — 27.'
- '1. Member States shall ensure that their competent authorities establish at least
one AI regulatory sandbox at national level, which shall be operational by 2 August
2026. That sandbox may also be established jointly with the competent authorities
of other Member States. The Commission may provide technical support, advice and
tools for the establishment and operation of AI regulatory sandboxes.
The obligation under the first subparagraph may also be fulfilled by participating
in an existing sandbox in so far as that participation provides an equivalent
level of national coverage for the participating Member States.'
- and that boils in a range of approximately 149 °C to 205 °C.) 649-345-00-4 232-489-3
8052-41-3 P Natural gas condensates (petroleum); Low boiling point naphtha — unspecified
(A complex combination of hydrocarbons separated as a liquid from natural gas
in a surface separator by retrograde condensation. It consists mainly of hydrocarbons
having carbon numbers predominantly in the range of C2 to C20. It is a liquid
at atmospheric temperature and pressure.) 649-346-00-X 265-047-3 64741-47-5 P
Natural gas (petroleum), raw liquid mix; Low boiling point naphtha — unspecified
(A complex combination of hydrocarbons separated as a liquid from natural gas
in a gas recycling plant by processes such as refrigeration or absorption. It
consists mainly of
- source_sentence: What should the report on income tax information include as per
Article 48c?
sentences:
- '(d)
seal any business premises and books or records for the period of time of, and
to the extent necessary for, the inspection.
3.
The undertaking or association of undertakings shall submit to inspections ordered
by decision of the Commission. The officials and other accompanying persons authorised
by the Commission to conduct an inspection shall exercise their powers upon production
of a Commission decision:
(a)
specifying the subject matter and purpose of the inspection;
(b)
containing a statement that, pursuant to Article 16, a lack of cooperation allows
the Commission to take a decision on the basis of the facts that are available
to it;
(c)'
- 'By way of derogation from Article 10c, the Member States concerned may only give
transitional free allocation to installations in accordance with that Article
for investments carried out until 31 December 2024. Any allowances available to
the Member States concerned in accordance with Article 10c for the period from
2021 to 2030 that are not used for such investments shall, in the proportion determined
by the respective Member State:
(a)
be added to the total quantity of allowances that the Member State concerned is
to auction pursuant to Article 10(2); or
(b)'
- '7.
Member States shall require subsidiary undertakings or branches not subject to
the provisions of paragraphs 4 and 5 of this Article to publish and make accessible
a report on income tax information where such subsidiary undertakings or branches
serve no other objective than to circumvent the reporting requirements set out
in this Chapter.
Article 48c
Content of the report on income tax information
1.
The report on income tax information required under Article 48b shall include
information relating to all the activities of the standalone undertaking or ultimate
parent undertaking, including those of all affiliated undertakings consolidated
in the financial statements in respect of the relevant financial year.
2.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.7
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33333333333333337
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.2
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.1
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8892789260714373
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.85
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.85
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m) <!-- at revision fc74610d18462d218e312aa986ec5c8a75a98152 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("amentaphd/example")
# Run inference
sentences = [
'What should the report on income tax information include as per Article 48c?',
'7.\n\nMember States shall require subsidiary undertakings or branches not subject to the provisions of paragraphs 4 and 5 of this Article to publish and make accessible a report on income tax information where such subsidiary undertakings or branches serve no other objective than to circumvent the reporting requirements set out in this Chapter.\n\nArticle 48c\n\nContent of the report on income tax information\n\n1.\n\nThe report on income tax information required under Article 48b shall include information relating to all the activities of the standalone undertaking or ultimate parent undertaking, including those of all affiliated undertakings consolidated in the financial statements in respect of the relevant financial year.\n\n2.',
'(d)\n\nseal any business premises and books or records for the period of time of, and to the extent necessary for, the inspection.\n\n3.\n\nThe undertaking or association of undertakings shall submit to inspections ordered by decision of the Commission. The officials and other accompanying persons authorised by the Commission to conduct an inspection shall exercise their powers upon production of a Commission decision:\n\n(a)\n\nspecifying the subject matter and purpose of the inspection;\n\n(b)\n\ncontaining a statement that, pursuant to Article 16, a lack of cooperation allows the Commission to take a decision on the basis of the facts that are available to it;\n\n(c)',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
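Because this model was trained with `MatryoshkaLoss` over dimensions 768/512/256/128/64 (see Training Details below), its embeddings can also be truncated to a smaller dimensionality with limited quality loss. A short sketch, assuming sentence-transformers >= 2.7, which supports the `truncate_dim` argument:

```python
from sentence_transformers import SentenceTransformer

# Load the model so it emits 256-dimensional embeddings directly.
model = SentenceTransformer("amentaphd/example", truncate_dim=256)
embeddings = model.encode(["What should the report on income tax information include?"])
print(embeddings.shape)
# (1, 256)
```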
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.7 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.7 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.8893** |
| cosine_mrr@10 | 0.85 |
| cosine_map@100 | 0.85 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10 training samples
* Columns: <code>query_text</code> and <code>doc_text</code>
* Approximate statistics based on the first 10 samples:
| | query_text | doc_text |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 38.6 tokens</li><li>max: 90 tokens</li></ul> | <ul><li>min: 111 tokens</li><li>mean: 231.0 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query_text | doc_text |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What are the requirements for Member States regarding the establishment of AI regulatory sandboxes, including the timeline for operational readiness and the possibility of joint establishment with other Member States?</code> | <code>1. Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at national level, which shall be operational by 2 August 2026. That sandbox may also be established jointly with the competent authorities of other Member States. The Commission may provide technical support, advice and tools for the establishment and operation of AI regulatory sandboxes.<br><br>The obligation under the first subparagraph may also be fulfilled by participating in an existing sandbox in so far as that participation provides an equivalent level of national coverage for the participating Member States.</code> |
| <code>Member States must provide updates on their national energy and climate strategies, detailing the anticipated energy savings from 2021 to 2030. They are also obligated to report on the necessary energy savings and the policies intended to achieve these goals. If assessments reveal that a Member State's measures are inadequate to meet energy savings targets, the Commission may issue recommendations for improvement. Additionally, any shortfall in energy savings must be addressed in subsequent obligation periods.</code> | <code>9. Member States shall apply and calculate the effect of the options chosen under paragraph 8 for the period referred to in paragraph 1, first subparagraph, points (a) and (b)(i), separately:<br><br>(a) for the calculation of the amount of energy savings required for the obligation period referred to in paragraph 1, first subparagraph, point (a), Member States may make use of the options listed in paragraph 8, points (a) to (d). All the options chosen under paragraph 8 taken together shall amount to no more than 25 % of the amount of energy savings referred to in paragraph 1, first subparagraph, point (a); (b) for the calculation of the amount of energy savings required for the obligation period referred to in paragraph 1, first subparagraph, point (b)(i), Member States may make use of the options listed in paragraph 8, points (b) to (g), provided that the individual actions referred to in paragraph 8, point (d), continue to have a verifiable and measurable impact after 31 December 2020. All...</code> |
| <code>What is the functional definition of a remote biometric identification system, and how does it operate in terms of identifying individuals without their active participation?</code> | <code>(17) The notion of ‘remote biometric identification system’ referred to in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons without their active involvement, typically at a distance, through the comparison of a person’s biometric data with the biometric data contained in a reference database, irrespectively of the particular technology, processes or types of biometric data used. Such remote biometric identification systems are typically used to perceive multiple persons or their behaviour simultaneously in order to facilitate significantly the identification of natural persons without their active involvement. This excludes AI systems intended to be used for biometric</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| -1 | -1 | 0.8893 |
### Framework Versions
- Python: 3.11.10
- Sentence Transformers: 4.0.2
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 0.26.0
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Hafez111/Hjjhhh
|
Hafez111
| 2025-06-16T18:59:31Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T18:59:31Z |
---
license: apache-2.0
---
|
matheuscs/llama-3.1-8b-identification-5
|
matheuscs
| 2025-06-16T18:46:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T18:45:51Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** matheuscs
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shivanshu0701/qwen2-7b-instruct-trl-sft-ChartQA
|
shivanshu0701
| 2025-06-16T18:44:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T17:56:52Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-trl-sft-ChartQA
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for qwen2-7b-instruct-trl-sft-ChartQA
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shivanshu0701/qwen2-7b-instruct-trl-sft-ChartQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
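Note that the base model is a vision-language model fine-tuned on ChartQA, so typical inputs pair a chart image with a question. A sketch using the processor directly (this assumes a transformers version with Qwen2.5-VL support; the image path is a placeholder):

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from PIL import Image

model_id = "shivanshu0701/qwen2-7b-instruct-trl-sft-ChartQA"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")

image = Image.open("chart.png")  # placeholder path to a chart image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "What is the highest value shown in the chart?"},
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(
    generated[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0])
```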
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shivanshugupta768-iit-kharagpur/qwen2-7b-instruct-trl-sft-od/runs/1ne3ggcm)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0.dev0
- Transformers: 4.53.0.dev0
- Pytorch: 2.4.1+cu121
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
VIDEOS-18-yololary-Viral-Video/wATCH.FULL.VIDEO.yololary.Viral.Video.Tutorial.Official
|
VIDEOS-18-yololary-Viral-Video
| 2025-06-16T17:54:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-16T17:54:37Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
Le-Camarade/perrier_bottle
|
Le-Camarade
| 2025-06-16T17:37:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-16T17:36:55Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: perrier_bottle
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# perrier_bottle
<Gallery />
## Model description
## Trigger words
You should use `perrier_bottle` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Le-Camarade/perrier_bottle/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
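## Use it with 🧨 diffusers

A usage sketch with the diffusers library. The weight filename `lora.safetensors` below is an assumption; check the Files & versions tab for the actual name.

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
# Assumed weight filename -- verify it in the repo's file list.
pipeline.load_lora_weights('Le-Camarade/perrier_bottle', weight_name='lora.safetensors')
image = pipeline('a perrier_bottle on a sunlit cafe table').images[0]
image.save('perrier_bottle.png')
```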
|
Tuslay/tuslay-lora1
|
Tuslay
| 2025-06-16T17:36:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-16T17:12:00Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Tuslay1
---
# Tuslay Lora1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Tuslay1` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Tuslay1",
"lora_weights": "https://huggingface.co/Tuslay/tuslay-lora1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Tuslay/tuslay-lora1', weight_name='lora.safetensors')
image = pipeline('Tuslay1').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Tuslay/tuslay-lora1/discussions) to add images that show off what you’ve made with this LoRA.
|
MJ92/Llama-2-7b-chat-hf_finetuned_50_fr
|
MJ92
| 2025-06-16T17:34:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T17:14:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
espnet/opencpop_svs_train_toksing_300epoch-multi_hl6_wl6_wl23
|
espnet
| 2025-06-16T16:59:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-16T16:50:47Z |
---
license: cc-by-4.0
---

Model Arch: TokSing

Dataset: Opencpop

You can run this model with espnet/svs2.

```sh
./run.sh --stage 8 \
  --kmeans_feature "multi/hubert_large_6+wavlm_large_6+wavlm_large_23" \
  --multi_token "hubert_large_ll60k_128_6 wavlm_large_128_6 wavlm_large_128_23" \
  --inference_model 300epoch.pth \
  --vocoder_file {vocoder_path}
```
|
Maybe0507/0ad41940-d6ff-40fc-b960-2558e97742c0
|
Maybe0507
| 2025-06-16T16:32:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-16T15:36:46Z |
Temporary Redirect. Redirecting to /rp93/0ad41940-d6ff-40fc-b960-2558e97742c0/resolve/main/README.md
|
falanaja/better-freckles-SD1.5.safetensors
|
falanaja
| 2025-06-16T16:28:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2025-06-16T16:27:08Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/rgthree.compare._temp_lherp_00001_.png
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
instance_prompt: null
---
# better-freckles-SD1.5.safetensors
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/falanaja/better-freckles-SD1.5.safetensors/tree/main) them in the Files & versions tab.
|
OpenVINO/whisper-small-int4-ov
|
OpenVINO
| 2025-06-16T16:27:39Z | 8 | 0 | null |
[
"openvino",
"whisper",
"audio",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"en",
"license:apache-2.0",
"region:us"
] |
automatic-speech-recognition
| 2025-06-03T11:41:11Z |
---
license: apache-2.0
license_link: https://choosealicense.com/licenses/apache-2.0/
language:
- en
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
---
# whisper-small-int4-ov
* Model creator: [OpenAI](https://huggingface.co/openai)
* Original model: [whisper-small](https://huggingface.co/openai/whisper-small)
## Description
This is the [whisper-small](https://huggingface.co/openai/whisper-small) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format, with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf).
## Quantization Parameters
Weight compression was performed using `nncf.compress_weights` with the following parameters:
* mode: **INT4_ASYM**
* ratio: **1.0**
* group_size: **128**
For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html).
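For reference, a weight-compression call matching these parameters looks roughly like the sketch below (the IR filename is a placeholder; the full conversion also involves exporting the model to OpenVINO IR first):

```
import nncf
import openvino as ov

core = ov.Core()
# Placeholder filename -- compress each exported IR (encoder/decoder) in turn.
model = core.read_model("openvino_encoder_model.xml")
compressed = nncf.compress_weights(
    model,
    mode=nncf.CompressWeightsMode.INT4_ASYM,
    ratio=1.0,
    group_size=128,
)
ov.save_model(compressed, "openvino_encoder_model_int4.xml")
```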
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2025.2.0 and higher
* Optimum Intel 1.23.0 and higher
## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```
pip install optimum[openvino]
```
2. Run model inference:
```
from datasets import load_dataset
from transformers import AutoProcessor
from optimum.intel.openvino import OVModelForSpeechSeq2Seq
model_id = "OpenVINO/whisper-small-int4-ov"
tokenizer = AutoProcessor.from_pretrained(model_id)
model = OVModelForSpeechSeq2Seq.from_pretrained(model_id)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation", trust_remote_code=True)
sample = dataset[0]
input_features = tokenizer(
sample["audio"]["array"],
sampling_rate=sample["audio"]["sampling_rate"],
return_tensors="pt",
).input_features
outputs = model.generate(input_features)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install packages required for using OpenVINO GenAI.
```
pip install huggingface_hub
pip install -U --pre --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly openvino openvino-tokenizers openvino-genai
```
2. Download model from HuggingFace Hub
```
import huggingface_hub as hf_hub
model_id = "OpenVINO/whisper-small-int4-ov"
model_path = "whisper-small-int4-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```
import openvino_genai as ov_genai
from datasets import load_dataset
device = "CPU"
pipe = ov_genai.WhisperPipeline(model_path, device)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation", trust_remote_code=True)
sample = dataset[0]["audio"]["array"]
print(pipe.generate(sample))
```
More GenAI usage examples can be found in OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples)
## Limitations
Check the [original model card](https://huggingface.co/openai/whisper-small) for limitations.
## Legal information
The original model is distributed under [apache-2.0](https://choosealicense.com/licenses/apache-2.0/) license. More details can be found in [original model card](https://huggingface.co/openai/whisper-small).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
OpenVINO/distil-small.en-int4-ov
|
OpenVINO
| 2025-06-16T16:25:55Z | 12 | 0 |
transformers
|
[
"transformers",
"openvino",
"whisper",
"automatic-speech-recognition",
"audio",
"transformers.js",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-04T12:58:53Z |
---
language:
- en
tags:
- audio
- automatic-speech-recognition
- transformers.js
pipeline_tag: automatic-speech-recognition
license: mit
license_link: https://github.com/huggingface/distil-whisper/blob/main/LICENSE
library_name: transformers
---
# distil-small.en-int4-ov
* Model creator: [Distil-whisper](https://huggingface.co/distil-whisper)
* Original model: [distil-small.en](https://huggingface.co/distil-whisper/distil-small.en)
## Description
This is the [distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format, with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf).
## Quantization Parameters
Weight compression was performed using `nncf.compress_weights` with the following parameters:
* mode: **INT4_ASYM**
* ratio: **1.0**
* group_size: **128**
For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html).
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2025.2.0 and higher
* Optimum Intel 1.23.0 and higher
## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```
pip install optimum[openvino]
```
2. Run model inference:
```
from datasets import load_dataset
from transformers import AutoProcessor
from optimum.intel.openvino import OVModelForSpeechSeq2Seq
model_id = "OpenVINO/distil-small.en-int4-ov"
tokenizer = AutoProcessor.from_pretrained(model_id)
model = OVModelForSpeechSeq2Seq.from_pretrained(model_id)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation", trust_remote_code=True)
sample = dataset[0]
input_features = tokenizer(
sample["audio"]["array"],
sampling_rate=sample["audio"]["sampling_rate"],
return_tensors="pt",
).input_features
outputs = model.generate(input_features)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install packages required for using OpenVINO GenAI.
```
pip install huggingface_hub
pip install -U --pre --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly openvino openvino-tokenizers openvino-genai
```
2. Download model from HuggingFace Hub
```
import huggingface_hub as hf_hub
model_id = "OpenVINO/distil-small.en-int4-ov"
model_path = "distil-small.en-int4-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```
import openvino_genai as ov_genai
from datasets import load_dataset
device = "CPU"
pipe = ov_genai.WhisperPipeline(model_path, device)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation", trust_remote_code=True)
sample = dataset[0]["audio"]["array"]
print(pipe.generate(sample))
```
More GenAI usage examples can be found in OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples)
## Limitations
Check the [original model card](https://huggingface.co/distil-whisper/distil-small.en) for limitations.
## Legal information
The original model is distributed under [mit](https://github.com/huggingface/distil-whisper/blob/main/LICENSE) license. More details can be found in [original model card](https://huggingface.co/distil-whisper/distil-small.en).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
OpenVINO/whisper-tiny-int8-ov
|
OpenVINO
| 2025-06-16T16:10:43Z | 55 | 0 | null |
[
"openvino",
"whisper",
"audio",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:apache-2.0",
"region:us"
] |
automatic-speech-recognition
| 2024-11-20T09:52:17Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
pipeline_tag: automatic-speech-recognition
license: apache-2.0
license_link: https://choosealicense.com/licenses/apache-2.0/
---
# whisper-tiny-int8-ov
* Model creator: [OpenAI](https://huggingface.co/openai)
* Original model: [whisper-tiny](https://huggingface.co/openai/whisper-tiny)
## Description
This is [whisper-tiny](https://huggingface.co/openai/whisper-tiny) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT8 by [NNCF](https://github.com/openvinotoolkit/nncf).
## Quantization Parameters
Weight compression was performed using `nncf.compress_weights` with the following parameters:
* mode: **INT8_ASYM**
* group_size: **128**
For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html).
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2025.2.0 and higher
* Optimum Intel 1.23.0 and higher
## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```
pip install optimum[openvino]
```
2. Run model inference:
```
from datasets import load_dataset
from transformers import AutoProcessor
from optimum.intel.openvino import OVModelForSpeechSeq2Seq
model_id = "OpenVINO/whisper-tiny-int8-ov"
tokenizer = AutoProcessor.from_pretrained(model_id)
model = OVModelForSpeechSeq2Seq.from_pretrained(model_id)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation", trust_remote_code=True)
sample = dataset[0]
input_features = tokenizer(
sample["audio"]["array"],
sampling_rate=sample["audio"]["sampling_rate"],
return_tensors="pt",
).input_features
outputs = model.generate(input_features)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install packages required for using OpenVINO GenAI.
```
pip install huggingface_hub
pip install -U --pre --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly openvino openvino-tokenizers openvino-genai
```
2. Download model from HuggingFace Hub
```
import huggingface_hub as hf_hub
model_id = "OpenVINO/whisper-tiny-int8-ov"
model_path = "whisper-tiny-int8-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```
import openvino_genai as ov_genai
from datasets import load_dataset
device = "CPU"
pipe = ov_genai.WhisperPipeline(model_path, device)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation", trust_remote_code=True)
sample = dataset[0]["audio"]["array"]
print(pipe.generate(sample))
```
More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
## Limitations
Check the [original model card](https://huggingface.co/openai/whisper-tiny) for limitations.
## Legal information
The original model is distributed under the [Apache 2.0](https://choosealicense.com/licenses/apache-2.0/) license. More details can be found in the [original model card](https://huggingface.co/openai/whisper-tiny).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
OpenVINO/distil-whisper-large-v2-fp16-ov
|
OpenVINO
| 2025-06-16T16:09:20Z | 57 | 0 |
transformers
|
[
"transformers",
"openvino",
"whisper",
"automatic-speech-recognition",
"audio",
"transformers.js",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-10-22T13:46:03Z |
---
language:
- en
tags:
- audio
- automatic-speech-recognition
- transformers.js
pipeline_tag: automatic-speech-recognition
license: mit
license_link: https://github.com/huggingface/distil-whisper/blob/main/LICENSE
library_name: transformers
---
# distil-whisper-large-v2-fp16-ov
* Model creator: [Distil-whisper](https://huggingface.co/distil-whisper)
* Original model: [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2)
## Description
This is [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to FP16.
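As a hedged sketch, such an export is typically produced with the Optimum Intel CLI; verify the flags against your installed version:
```
optimum-cli export openvino --model distil-whisper/distil-large-v2 --weight-format fp16 distil-whisper-large-v2-fp16-ov
```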
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2025.2.0 and higher
* Optimum Intel 1.23.0 and higher
## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```
pip install optimum[openvino]
```
2. Run model inference:
```
from datasets import load_dataset
from transformers import AutoProcessor
from optimum.intel.openvino import OVModelForSpeechSeq2Seq
model_id = "OpenVINO/distil-whisper-large-v2-fp16-ov"
tokenizer = AutoProcessor.from_pretrained(model_id)
model = OVModelForSpeechSeq2Seq.from_pretrained(model_id)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation", trust_remote_code=True)
sample = dataset[0]
input_features = tokenizer(
sample["audio"]["array"],
sampling_rate=sample["audio"]["sampling_rate"],
return_tensors="pt",
).input_features
outputs = model.generate(input_features)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install packages required for using OpenVINO GenAI.
```
pip install huggingface_hub
pip install -U --pre --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly openvino openvino-tokenizers openvino-genai
```
2. Download model from HuggingFace Hub
```
import huggingface_hub as hf_hub
model_id = "OpenVINO/distil-whisper-large-v2-fp16-ov"
model_path = "distil-whisper-large-v2-fp16-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```
import openvino_genai as ov_genai
from datasets import load_dataset
device = "CPU"
pipe = ov_genai.WhisperPipeline(model_path, device)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation", trust_remote_code=True)
sample = dataset[0]["audio"]["array"]
print(pipe.generate(sample))
```
More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
## Limitations
Check the [original model card](https://huggingface.co/distil-whisper/distil-large-v2) for limitations.
## Legal information
The original model is distributed under the [MIT](https://github.com/huggingface/distil-whisper/blob/main/LICENSE) license. More details can be found in the [original model card](https://huggingface.co/distil-whisper/distil-large-v2).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
OpenVINO/distil-whisper-large-v2-int4-ov
|
OpenVINO
| 2025-06-16T16:08:36Z | 26 | 0 |
transformers
|
[
"transformers",
"openvino",
"whisper",
"automatic-speech-recognition",
"audio",
"transformers.js",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-10-23T05:53:08Z |
---
language:
- en
tags:
- audio
- automatic-speech-recognition
- transformers.js
pipeline_tag: automatic-speech-recognition
license: mit
license_link: https://github.com/huggingface/distil-whisper/blob/main/LICENSE
library_name: transformers
---
# distil-whisper-large-v2-int4-ov
* Model creator: [Distil-whisper](https://huggingface.co/distil-whisper)
* Original model: [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2)
## Description
This is [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf).
## Quantization Parameters
Weight compression was performed using `nncf.compress_weights` with the following parameters:
* mode: **INT4_ASYM**
* ratio: **1.0**
* group_size: **128**
For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html).
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2025.2.0 and higher
* Optimum Intel 1.23.0 and higher
## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```
pip install optimum[openvino]
```
2. Run model inference:
```
from datasets import load_dataset
from transformers import AutoProcessor
from optimum.intel.openvino import OVModelForSpeechSeq2Seq
model_id = "OpenVINO/distil-whisper-large-v2-int4-ov"
tokenizer = AutoProcessor.from_pretrained(model_id)
model = OVModelForSpeechSeq2Seq.from_pretrained(model_id)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation", trust_remote_code=True)
sample = dataset[0]
input_features = tokenizer(
sample["audio"]["array"],
sampling_rate=sample["audio"]["sampling_rate"],
return_tensors="pt",
).input_features
outputs = model.generate(input_features)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install packages required for using OpenVINO GenAI.
```
pip install huggingface_hub
pip install -U --pre --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly openvino openvino-tokenizers openvino-genai
```
2. Download model from HuggingFace Hub
```
import huggingface_hub as hf_hub
model_id = "OpenVINO/distil-whisper-large-v2-int4-ov"
model_path = "distil-whisper-large-v2-int4-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```
import openvino_genai as ov_genai
from datasets import load_dataset
device = "CPU"
pipe = ov_genai.WhisperPipeline(model_path, device)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation", trust_remote_code=True)
sample = dataset[0]["audio"]["array"]
print(pipe.generate(sample))
```
More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
## Limitations
Check the [original model card](https://huggingface.co/distil-whisper/distil-large-v2) for limitations.
## Legal information
The original model is distributed under the [MIT](https://github.com/huggingface/distil-whisper/blob/main/LICENSE) license. More details can be found in the [original model card](https://huggingface.co/distil-whisper/distil-large-v2).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
multimolecule/mrnafm
|
multimolecule
| 2025-06-16T16:02:28Z | 0 | 0 |
multimolecule
|
[
"multimolecule",
"pytorch",
"safetensors",
"rnafm",
"Biology",
"RNA",
"fill-mask",
"rna",
"dataset:multimolecule/rnacentral",
"arxiv:2204.00300",
"license:agpl-3.0",
"region:us"
] |
fill-mask
| 2025-06-16T16:01:36Z |
---
language: rna
tags:
- Biology
- RNA
license: agpl-3.0
datasets:
- multimolecule/rnacentral
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
- example_title: "Homo sapiens PRNP mRNA for prion"
text: "AGC<mask>CAUUAUGGCGAACCUUGGCUGCUG"
output:
- label: "AAA"
score: 0.05433480441570282
- label: "AUC"
score: 0.04437034949660301
- label: "AAU"
score: 0.03882088139653206
- label: "ACA"
score: 0.037016965448856354
- label: "ACC"
score: 0.03563101962208748
---
# mRNA-FM
Pre-trained model on mRNA CoDing Sequence (CDS) using a masked language modeling (MLM) objective.
## Disclaimer
This is an UNOFFICIAL implementation of the [Interpretable RNA Foundation Model from Unannotated Data for Highly Accurate RNA Structure and Function Predictions](https://doi.org/10.1101/2022.08.06.503062) by Jiayang Chen, Zhihang Hu, Siqi Sun, et al.
The OFFICIAL repository of RNA-FM is at [ml4bio/RNA-FM](https://github.com/ml4bio/RNA-FM).
> [!TIP]
> The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
**The team releasing RNA-FM did not write this model card, so it has been written by the MultiMolecule team.**
## Model Details
RNA-FM is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of non-coding RNA sequences in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those texts. Please refer to the [Training Details](#training-details) section for more information on the training process.
### Variants
- **[multimolecule/rnafm](https://huggingface.co/multimolecule/rnafm)**: The RNA-FM model pre-trained on non-coding RNA sequences.
- **[multimolecule/mrnafm](https://huggingface.co/multimolecule/mrnafm)**: The RNA-FM model pre-trained on messenger RNA sequences.
### Model Specification
<table>
<thead>
<tr>
<th>Variants</th>
<th>Num Layers</th>
<th>Hidden Size</th>
<th>Num Heads</th>
<th>Intermediate Size</th>
<th>Num Parameters (M)</th>
<th>FLOPs (G)</th>
<th>MACs (G)</th>
<th>Max Num Tokens</th>
</tr>
</thead>
<tbody>
<tr>
<td>RNA-FM</td>
<td rowspan="2">12</td>
<td>640</td>
<td rowspan="2">20</td>
<td rowspan="2">5120</td>
<td>99.52</td>
<td>25.68</td>
<td>12.83</td>
<td rowspan="2">1024</td>
</tr>
<tr>
<td>mRNA-FM</td>
<td>1280</td>
<td>239.25</td>
<td>61.43</td>
<td>30.7</td>
</tr>
</tbody>
</table>
### Links
- **Code**: [multimolecule.rnafm](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/rnafm)
- **Data**: [multimolecule/rnacentral](https://huggingface.co/datasets/multimolecule/rnacentral)
- **Paper**: [Interpretable RNA Foundation Model from Unannotated Data for Highly Accurate RNA Structure and Function Predictions](https://doi.org/10.1101/2022.08.06.503062)
- **Developed by**: Jiayang Chen, Zhihang Hu, Siqi Sun, Qingxiong Tan, Yixuan Wang, Qinze Yu, Licheng Zong, Liang Hong, Jin Xiao, Tao Shen, Irwin King, Yu Li
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [ESM](https://huggingface.co/facebook/esm2_t48_15B_UR50D)
- **Original Repository**: [ml4bio/RNA-FM](https://github.com/ml4bio/RNA-FM)
## Usage
The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:
```bash
pip install multimolecule
```
### Direct Use
#### Masked Language Modeling
You can use this model directly with a pipeline for masked language modeling:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline("fill-mask", model="multimolecule/mrnafm")
>>> unmasker("agc<mask>cauuauggcgaaccuuggcugcug")
[{'score': 0.05433480441570282,
'token': 6,
'token_str': 'AAA',
'sequence': 'AGC AAA CAU UAU GGC GAA CCU UGG CUG CUG'},
{'score': 0.04437034949660301,
'token': 22,
'token_str': 'AUC',
'sequence': 'AGC AUC CAU UAU GGC GAA CCU UGG CUG CUG'},
{'score': 0.03882088139653206,
'token': 9,
'token_str': 'AAU',
'sequence': 'AGC AAU CAU UAU GGC GAA CCU UGG CUG CUG'},
{'score': 0.037016965448856354,
'token': 11,
'token_str': 'ACA',
'sequence': 'AGC ACA CAU UAU GGC GAA CCU UGG CUG CUG'},
{'score': 0.03563101962208748,
'token': 12,
'token_str': 'ACC',
'sequence': 'AGC ACC CAU UAU GGC GAA CCU UGG CUG CUG'}]
```
#### RNA Secondary Structure Prediction
You can use this model to predict the secondary structure of an RNA sequence:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> predictor = pipeline("rna-secondary-structure", model="multimolecule/mrnafm")
>>> predictor("agcagucauuauggcgaa")
{'sequence': 'AGC AGU CAU UAU GGC GAA',
'secondary_structure': '((([(]',
'contact_map': [[0.5119704604148865, 0.5045265555381775, 0.494497150182724, 0.4931190013885498, 0.4915284812450409, 0.5020371675491333],
[0.5045265555381775, 0.5034880042076111, 0.5013145804405212, 0.49390116333961487, 0.5006486773490906, 0.49380120635032654],
[0.494497150182724, 0.5013145804405212, 0.5010323524475098, 0.5058367252349854, 0.5021511912345886, 0.49284809827804565],
[0.4931190013885498, 0.49390116333961487, 0.5058367252349854, 0.4988723397254944, 0.5004245042800903, 0.5055262446403503],
[0.4915284812450409, 0.5006486773490906, 0.5021511912345886, 0.5004245042800903, 0.4953134059906006, 0.5076138377189636],
[0.5020371675491333, 0.49380120635032654, 0.49284809827804565, 0.5055262446403503, 0.5076138377189636, 0.4958533048629761]]}
```
### Downstream Use
#### Extract Features
Here is how to use this model to get the features of a given sequence in PyTorch:
```python
from multimolecule import RnaTokenizer, RnaFmModel
tokenizer = RnaTokenizer.from_pretrained("multimolecule/mrnafm")
model = RnaFmModel.from_pretrained("multimolecule/mrnafm")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
output = model(**input)
```
#### Sequence Classification / Regression
> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as backbone to fine-tune for a sequence-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, RnaFmForSequencePrediction
tokenizer = RnaTokenizer.from_pretrained("multimolecule/mrnafm")
model = RnaFmForSequencePrediction.from_pretrained("multimolecule/mrnafm")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.tensor([1])
output = model(**input, labels=label)
```
#### Token Classification / Regression
> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for token classification or regression.
Here is how to use this model as backbone to fine-tune for a nucleotide-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, RnaFmForTokenPrediction
tokenizer = RnaTokenizer.from_pretrained("multimolecule/mrnafm")
model = RnaFmForTokenPrediction.from_pretrained("multimolecule/mrnafm")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), ))
output = model(**input, labels=label)
```
#### Contact Classification / Regression
> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
Here is how to use this model as backbone to fine-tune for a contact-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, RnaFmForContactPrediction
tokenizer = RnaTokenizer.from_pretrained("multimolecule/mrnafm")
model = RnaFmForContactPrediction.from_pretrained("multimolecule/mrnafm")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), len(text)))
output = model(**input, labels=label)
```
## Training Details
RNA-FM used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input, then runs the entire masked sequence through the model and must predict the masked tokens. This is comparable to the Cloze task in language modeling.
### Training Data
The RNA-FM model was pre-trained on [RNAcentral](https://multimolecule.danling.org/datasets/rnacentral).
RNAcentral is a free, public resource that offers integrated access to a comprehensive and up-to-date set of non-coding RNA sequences provided by a collaborating group of [Expert Databases](https://rnacentral.org/expert-databases) representing a broad range of organisms and RNA types.
RNA-FM applied [CD-HIT (CD-HIT-EST)](https://sites.google.com/view/cd-hit) with a cut-off at 100% sequence identity to remove redundancy from RNAcentral. The final dataset contains 23.7 million non-redundant RNA sequences.
RNA-FM preprocessed all tokens by replacing "U"s with "T"s.
Note that during model conversions, "T" is replaced with "U". [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`.
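For example, a minimal sketch of disabling the conversion (assuming the keyword is forwarded through `from_pretrained`, as with standard tokenizer options):
```python
from multimolecule import RnaTokenizer

# Keep DNA-style "T"s instead of converting them to "U"s (hypothetical usage).
tokenizer = RnaTokenizer.from_pretrained("multimolecule/mrnafm", replace_T_with_U=False)
```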
### Training Procedure
#### Preprocessing
RNA-FM used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT (a minimal sketch follows this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
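A minimal PyTorch sketch of this 80/10/10 scheme; the mask token ID and vocabulary size are illustrative assumptions, not the model's real values:
```python
import torch

def mask_tokens(input_ids, mask_token_id=4, vocab_size=26, mlm_probability=0.15):
    labels = input_ids.clone()
    # Select 15% of the positions for masking.
    masked = torch.bernoulli(torch.full(input_ids.shape, mlm_probability)).bool()
    labels[~masked] = -100  # only masked positions contribute to the MLM loss

    # 80% of the selected positions are replaced by <mask>.
    to_mask = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & masked
    input_ids[to_mask] = mask_token_id

    # 10% are replaced by a random token (half of the remaining 20%).
    to_random = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & masked & ~to_mask
    input_ids[to_random] = torch.randint(vocab_size, input_ids.shape)[to_random]

    # The final 10% are left unchanged.
    return input_ids, labels

ids, labels = mask_tokens(torch.randint(5, 26, (1, 32)))
```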
#### Pre-training
The model was trained on 8 NVIDIA A100 GPUs with 80GiB memories.
- Learning rate: 1e-4
- Learning rate scheduler: Inverse square root
- Learning rate warm-up: 10,000 steps
- Weight decay: 0.01
## Citation
**BibTeX**:
```bibtex
@article{chen2022interpretable,
title={Interpretable rna foundation model from unannotated data for highly accurate rna structure and function predictions},
author={Chen, Jiayang and Hu, Zhihang and Sun, Siqi and Tan, Qingxiong and Wang, Yixuan and Yu, Qinze and Zong, Licheng and Hong, Liang and Xiao, Jin and King, Irwin and others},
journal={arXiv preprint arXiv:2204.00300},
year={2022}
}
```
## Contact
Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.
Please contact the authors of the [RNA-FM paper](https://doi.org/10.1101/2022.08.06.503062) for questions or comments on the paper/model.
## License
This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).
```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
|
IoanaLiviaPopescu/real-data-synth-data-2400-1-Emil-whisper-small
|
IoanaLiviaPopescu
| 2025-06-16T16:02:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ro",
"dataset:IoanaLivia/RealVoiceSynthVoice-2400-1-Emil-Neural",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-16T14:14:24Z |
---
library_name: transformers
language:
- ro
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- IoanaLivia/RealVoiceSynthVoice-2400-1-Emil-Neural
metrics:
- wer
model-index:
- name: IoanaLiviaPopescu/real-data-synth-data-2400-1-Emil-whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: IoanaLivia/RealVoiceSynthVoice-2400-1-Emil-Neural
type: IoanaLivia/RealVoiceSynthVoice-2400-1-Emil-Neural
config: default
split: test
args: 'split: validation'
metrics:
- name: Wer
type: wer
value: 14.622902452517057
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IoanaLiviaPopescu/real-data-synth-data-2400-1-Emil-whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IoanaLivia/RealVoiceSynthVoice-2400-1-Emil-Neural dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3612
- Wer: 14.6229
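A minimal inference sketch, assuming standard `transformers` ASR pipeline usage (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="IoanaLiviaPopescu/real-data-synth-data-2400-1-Emil-whisper-small",
)
print(asr("sample.wav")["text"])  # expects a 16 kHz mono recording
```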
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (bitsandbytes, `OptimizerNames.ADAMW_BNB`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 0 | 0 | 0.6024 | 27.8812 |
| 0.2187 | 1.0 | 88 | 0.3707 | 17.1861 |
| 0.0723 | 2.0 | 176 | 0.3540 | 15.4158 |
| 0.0351 | 3.0 | 264 | 0.3612 | 14.6229 |
| 0.0189 | 4.0 | 352 | 0.3806 | 15.9137 |
| 0.0129 | 5.0 | 440 | 0.3933 | 16.0981 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.15_0.15_0.5_epoch1
|
MinaMila
| 2025-06-16T15:45:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T15:43:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aieng-lab/Llama-3.2-3B_comment-type-pharo
|
aieng-lab
| 2025-06-16T15:38:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-classification",
"en",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-14T11:52:35Z |
---
library_name: transformers
license: llama3.2
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- meta-llama/Llama-3.2-3B
pipeline_tag: text-classification
---
# Llama 3.2 3b for classifying code comments (multi-label)
This model classifies comments in Pharo code as 'keyimplementationpoints', 'example', 'responsibilities', 'classreferences', 'intent', 'keymessages' or 'collaborators'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** Llama 3.2 Community License Agreement
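A hedged usage sketch: this loads the checkpoint with the generic text-classification pipeline; the example comment and the returned label names are assumptions, not documented by the authors.
```python
from transformers import pipeline

# Multi-label classification; top_k=None returns a score for every class.
classifier = pipeline(
    "text-classification",
    model="aieng-lab/Llama-3.2-3B_comment-type-pharo",
    top_k=None,
)
print(classifier("I represent the public API for rendering text."))
```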
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
aieng-lab/Llama-3.2-3B_issue-type
|
aieng-lab
| 2025-06-16T15:34:02Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-classification",
"en",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-13T09:42:14Z |
---
library_name: transformers
license: llama3.2
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- meta-llama/Llama-3.2-3B
pipeline_tag: text-classification
---
# Llama 3.2 3b for classifying issues
This model classifies GitHub issues as 'bug', 'enhancement' or 'question'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** Llama 3.2 Community License Agreement
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
aieng-lab/CodeLlama-7b-hf_smell-doc
|
aieng-lab
| 2025-06-16T15:27:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-classification",
"en",
"base_model:meta-llama/CodeLlama-7b-hf",
"base_model:finetune:meta-llama/CodeLlama-7b-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-16T10:29:36Z |
---
library_name: transformers
license: llama2
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- meta-llama/CodeLlama-7b-hf
pipeline_tag: text-classification
---
# CodeLlama 7b for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [meta-llama/CodeLlama-7b-hf](https://huggingface.co/meta-llama/CodeLlama-7b-hf)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** Llama 2 Community License Agreement
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
aieng-lab/CodeLlama-7b-hf_comment-type-java
|
aieng-lab
| 2025-06-16T15:26:26Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-classification",
"en",
"base_model:meta-llama/CodeLlama-7b-hf",
"base_model:finetune:meta-llama/CodeLlama-7b-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-13T17:52:57Z |
---
library_name: transformers
license: llama2
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- meta-llama/CodeLlama-7b-hf
pipeline_tag: text-classification
---
# CodeLlama 7b for classifying code comments (multi-label)
This model classifies comments in Java code as 'summary', 'ownership', 'expand', 'usage', 'pointer', 'deprecation' or 'rational'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [meta-llama/CodeLlama-7b-hf](https://huggingface.co/meta-llama/CodeLlama-7b-hf)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** Llama 2 Community License Agreement
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
DevQuasar/Delta-Vector.Austral-70B-Preview-GGUF
|
DevQuasar
| 2025-06-16T15:12:33Z | 0 | 0 | null |
[
"text-generation",
"base_model:Delta-Vector/Austral-70B-Preview",
"base_model:finetune:Delta-Vector/Austral-70B-Preview",
"region:us"
] |
text-generation
| 2025-06-16T15:12:33Z |
---
base_model:
- Delta-Vector/Austral-70B-Preview
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [Delta-Vector/Austral-70B-Preview](https://huggingface.co/Delta-Vector/Austral-70B-Preview)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
mradermacher/DRA-GRPO-GGUF
|
mradermacher
| 2025-06-16T14:57:55Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:SpiceRL/DRA-GRPO",
"base_model:quantized:SpiceRL/DRA-GRPO",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-16T12:33:12Z |
---
base_model: SpiceRL/DRA-GRPO
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SpiceRL/DRA-GRPO
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DRA-GRPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
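As a hedged example, a local [llama.cpp](https://github.com/ggml-org/llama.cpp) build can run a downloaded quant directly; the binary name and flags may differ across versions:
```
./llama-cli -m DRA-GRPO.Q4_K_M.gguf -p "Why is the sky blue?" -n 128
```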
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DRA-GRPO-GGUF/resolve/main/DRA-GRPO.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/DRA-GRPO-GGUF/resolve/main/DRA-GRPO.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/DRA-GRPO-GGUF/resolve/main/DRA-GRPO.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DRA-GRPO-GGUF/resolve/main/DRA-GRPO.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DRA-GRPO-GGUF/resolve/main/DRA-GRPO.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DRA-GRPO-GGUF/resolve/main/DRA-GRPO.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DRA-GRPO-GGUF/resolve/main/DRA-GRPO.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DRA-GRPO-GGUF/resolve/main/DRA-GRPO.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DRA-GRPO-GGUF/resolve/main/DRA-GRPO.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DRA-GRPO-GGUF/resolve/main/DRA-GRPO.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DRA-GRPO-GGUF/resolve/main/DRA-GRPO.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DRA-GRPO-GGUF/resolve/main/DRA-GRPO.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
phospho-app/gerotropic-ACT_BBOX-so101_pnp-gk8wt
|
phospho-app
| 2025-06-16T14:51:15Z | 0 | 0 | null |
[
"phosphobot",
"act",
"region:us"
] | null | 2025-06-16T14:45:44Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
No video directory found with key main, found: [PosixPath('/__modal/volumes/vo-jpHx3K78b6s9tZZNuqKoXe/datasets/gerotropic/so101_pnp_bboxes/videos/chunk-000/observation.images.laptop'), PosixPath('/__modal/volumes/vo-jpHx3K78b6s9tZZNuqKoXe/datasets/gerotropic/so101_pnp_bboxes/videos/chunk-000/observation.images.phone')]
```
## Training parameters:
- **Dataset**: [gerotropic/so101_pnp](https://huggingface.co/datasets/gerotropic/so101_pnp)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
Sharing22/ilu_7
|
Sharing22
| 2025-06-16T14:49:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T14:45:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jim168872/ppo-LunarLander-v3-clip-coef0.45
|
Jim168872
| 2025-06-16T14:33:05Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v3",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-15T07:17:01Z |
---
tags:
- LunarLander-v3
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v3
type: LunarLander-v3
metrics:
- type: mean_reward
value: -186.10 +/- 106.66
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v3
This is a trained model of a PPO agent playing LunarLander-v3.
# Hyperparameters
```python
{'exp_name': 'lunar_clip_0.45',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v3',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.45,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Jim168872/ppo-LunarLander-v3-clip-coef0.45',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
soumitsr/long-t5-sm-article-digestor
|
soumitsr
| 2025-06-16T14:14:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-15T16:12:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Bonnief/afriberta-am-hornmt
|
Bonnief
| 2025-06-16T14:07:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:castorini/afriberta_small",
"base_model:finetune:castorini/afriberta_small",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-06-16T14:06:16Z |
---
library_name: transformers
base_model: castorini/afriberta_small
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: afriberta-am-hornmt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-am-hornmt
This model is a fine-tuned version of [castorini/afriberta_small](https://huggingface.co/castorini/afriberta_small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5295
- Accuracy: 0.4032
## Model description
More information needed
## Intended uses & limitations
More information needed
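Until more guidance is added, the minimal fill-mask sketch below should work; the English example sentence is only a placeholder, and the mask token is read from the tokenizer rather than hardcoded (XLM-RoBERTa checkpoints normally use `<mask>`).
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="Bonnief/afriberta-am-hornmt")

# Read the mask token from the tokenizer instead of hardcoding it.
masked = f"Addis Ababa is the capital of {fill_mask.tokenizer.mask_token}."
for prediction in fill_mask(masked, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 4))
```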
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
chenhl19780810/llama3_2-3B-it-thinking-function_calling-V0
|
chenhl19780810
| 2025-06-16T14:04:34Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T13:49:08Z |
---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
model_name: llama3_2-3B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama3_2-3B-it-thinking-function_calling-V0
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chenhl19780810/llama3_2-3B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
IoanaLiviaPopescu/real-data-synth-data-2000-1-Standard-B-whisper-small
|
IoanaLiviaPopescu
| 2025-06-16T14:02:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ro",
"dataset:IoanaLivia/RealVoiceSynthVoice-2000-1-Standard-B",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-16T12:27:31Z |
---
library_name: transformers
language:
- ro
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- IoanaLivia/RealVoiceSynthVoice-2000-1-Standard-B
metrics:
- wer
model-index:
- name: IoanaLiviaPopescu/IoanaLiviaPopescu/real-data-synth-data-2000-1-Standard-B-whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: IoanaLivia/RealVoiceSynthVoice-2000-1-Standard-B
type: IoanaLivia/RealVoiceSynthVoice-2000-1-Standard-B
config: default
split: test
args: 'split: validation'
metrics:
- name: Wer
type: wer
value: 16.90945970864835
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IoanaLiviaPopescu/real-data-synth-data-2000-1-Standard-B-whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IoanaLivia/RealVoiceSynthVoice-2000-1-Standard-B dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3936
- Wer: 16.9095
## Model description
More information needed
## Intended uses & limitations
More information needed
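In the absence of a usage snippet, a minimal transcription sketch is shown below; the audio filename is a placeholder, and forcing `language`/`task` through `generate_kwargs` follows the usual Whisper fine-tune convention.
```python
from transformers import pipeline

# Load the fine-tuned Romanian Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="IoanaLiviaPopescu/real-data-synth-data-2000-1-Standard-B-whisper-small",
)

# "sample.wav" is a placeholder; a 16 kHz mono Romanian recording is assumed.
result = asr("sample.wav", generate_kwargs={"language": "romanian", "task": "transcribe"})
print(result["text"])
```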
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 0 | 0 | 0.6024 | 27.8812 |
| 0.2265 | 1.0 | 76 | 0.3916 | 18.4031 |
| 0.0783 | 2.0 | 152 | 0.3706 | 17.0201 |
| 0.039 | 3.0 | 228 | 0.3747 | 16.9648 |
| 0.0218 | 4.0 | 304 | 0.3936 | 16.9095 |
| 0.015 | 5.0 | 380 | 0.4077 | 17.2045 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
chen12321345/model
|
chen12321345
| 2025-06-16T13:51:39Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T13:49:34Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** chen12321345
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Chedjoun/llama3-finetuned-promql
|
Chedjoun
| 2025-06-16T13:20:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T13:20:10Z |
---
base_model: unsloth/llama-3-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Chedjoun
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ronald888/fused-mistral-7b-v0.3-4bit
|
ronald888
| 2025-06-16T13:14:23Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"gguf",
"mistral",
"text-generation",
"conversational",
"base_model:mlx-community/Mistral-7B-Instruct-v0.3-4bit",
"base_model:quantized:mlx-community/Mistral-7B-Instruct-v0.3-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T01:56:51Z |
---
license: apache-2.0
tags:
- mlx
library_name: mlx
pipeline_tag: text-generation
base_model: mlx-community/Mistral-7B-Instruct-v0.3-4bit
---
# ronald888/fused-mistral-7b-v0.3-4bit
This model [ronald888/fused-mistral-7b-v0.3-4bit](https://huggingface.co/ronald888/fused-mistral-7b-v0.3-4bit) was
converted to MLX format from [mlx-community/Mistral-7B-Instruct-v0.3-4bit](https://huggingface.co/mlx-community/Mistral-7B-Instruct-v0.3-4bit)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("ronald888/fused-mistral-7b-v0.3-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.05_0.5_0.25_epoch1
|
MinaMila
| 2025-06-16T13:04:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T13:02:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SMARTICT/gemma-3-4b-it-tr-ft
|
SMARTICT
| 2025-06-16T12:54:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T12:54:19Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SMARTICT
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Gio88/bert-finetuned-ner
|
Gio88
| 2025-06-16T12:47:45Z | 0 | 1 | null |
[
"safetensors",
"bert",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-06-16T10:26:04Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9345159151193634
- name: Recall
type: recall
value: 0.9486704813194211
- name: F1
type: f1
value: 0.9415400033405713
- name: Accuracy
type: accuracy
value: 0.986489668570083
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0619
- Precision: 0.9345
- Recall: 0.9487
- F1: 0.9415
- Accuracy: 0.9865
## Model description
More information needed
## Intended uses & limitations
More information needed
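Pending details from the author, the standard token-classification usage applies; the example sentence below is illustrative only.
```python
from transformers import pipeline

# aggregation_strategy="simple" merges B-/I- word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="Gio88/bert-finetuned-ner",
    aggregation_strategy="simple",
)

for entity in ner("Hugging Face was founded in New York City."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```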
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0759 | 1.0 | 1756 | 0.0636 | 0.9103 | 0.9360 | 0.9230 | 0.9826 |
| 0.036 | 2.0 | 3512 | 0.0609 | 0.9348 | 0.9482 | 0.9414 | 0.9858 |
| 0.0201 | 3.0 | 5268 | 0.0619 | 0.9345 | 0.9487 | 0.9415 | 0.9865 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.7.1+cpu
- Datasets 3.6.0
- Tokenizers 0.19.1
|
Dure-Fishan-Official-Viral-Videos/New.tutorial.Dure.Fishan.Viral.Video.Leaks.Official
|
Dure-Fishan-Official-Viral-Videos
| 2025-06-16T12:24:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-16T12:24:27Z |
|
aieng-lab/gpt2-medium_story-points
|
aieng-lab
| 2025-06-16T12:19:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-classification",
"en",
"base_model:openai-community/gpt2-medium",
"base_model:finetune:openai-community/gpt2-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-16T12:19:35Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- gpt2-medium
pipeline_tag: text-classification
---
# GPT-2 medium for estimating story points
This model estimates story points as a numerical value.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [gpt2-medium](https://huggingface.co/gpt2-medium)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
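The card does not show how to query the model, so the sketch below is an assumption based on the description above: the checkpoint is loaded as a sequence-classification model and the raw logit is read as the numerical story-point estimate.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "aieng-lab/gpt2-medium_story-points"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Assumption: a one-dimensional regression head, so the raw logit is the estimate.
issue = "Add pagination support to the search results API endpoint."
inputs = tokenizer(issue, return_tensors="pt", truncation=True)
with torch.no_grad():
    estimate = model(**inputs).logits.squeeze().item()
print(f"Estimated story points: {estimate:.1f}")
```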
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
dllmpg/ppo-0.2
|
dllmpg
| 2025-06-16T12:14:50Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-16T11:55:44Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -221.45 +/- 187.38
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'notebook',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'dllmpg/ppo-0.2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
18-VIDEOS-Shubham-gupta-viral-Video-link/New.tutorial.Shubham.gupta.Viral.Video.Leaks.Official
|
18-VIDEOS-Shubham-gupta-viral-Video-link
| 2025-06-16T12:06:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-16T12:06:27Z |
|
HusnainM7/Phi3instruct
|
HusnainM7
| 2025-06-16T11:58:27Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"region:us"
] | null | 2025-06-16T11:58:23Z |
---
base_model: microsoft/phi-2
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.5_0.25_epoch1
|
MinaMila
| 2025-06-16T11:30:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T11:29:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
donvitomd/donvitox
|
donvitomd
| 2025-06-16T11:21:43Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-16T10:04:01Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
LaaP-ai/donut-base-invoice-v1.04
|
LaaP-ai
| 2025-06-16T11:00:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-16T10:59:51Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: donut-base-invoice-v1.04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-invoice-v1.04
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
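Since the base-model link above is empty, the sketch below assumes the usual Donut setup implied by the repository name (`DonutProcessor` plus `VisionEncoderDecoderModel`); the image filename and the `<s>` task prompt are placeholders that may need adjusting to the checkpoint's actual task token.
```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "LaaP-ai/donut-base-invoice-v1.04"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

# "invoice.png" and the "<s>" prompt are placeholders.
image = Image.open("invoice.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
prompt_ids = processor.tokenizer("<s>", add_special_tokens=False, return_tensors="pt").input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=prompt_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```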
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.25_0.15_epoch1
|
MinaMila
| 2025-06-16T10:59:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T10:57:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
furkankarakuz/test-code-search-net-tokenizer
|
furkankarakuz
| 2025-06-16T10:50:49Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:50:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.75_0.05_epoch1
|
MinaMila
| 2025-06-16T10:43:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T10:41:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
justdone-org/flan-t5-base-chunks_1_4-7550000
|
justdone-org
| 2025-06-16T10:41:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-16T10:41:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF
|
Theros
| 2025-06-16T10:33:19Z | 0 | 0 | null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Theros/Qwen2.5-ColdBrew-R1",
"llama-cpp",
"gguf-my-repo",
"base_model:SvalTek/Q2.5-ColdBrew-R1-Forge",
"base_model:quantized:SvalTek/Q2.5-ColdBrew-R1-Forge",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-16T10:32:53Z |
---
base_model: SvalTek/Q2.5-ColdBrew-R1-Forge
tags:
- merge
- mergekit
- lazymergekit
- Theros/Qwen2.5-ColdBrew-R1
- llama-cpp
- gguf-my-repo
---
# Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF
This model was converted to GGUF format from [`SvalTek/Q2.5-ColdBrew-R1-Forge`](https://huggingface.co/SvalTek/Q2.5-ColdBrew-R1-Forge) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SvalTek/Q2.5-ColdBrew-R1-Forge) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF --hf-file q2.5-coldbrew-r1-forge-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF --hf-file q2.5-coldbrew-r1-forge-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF --hf-file q2.5-coldbrew-r1-forge-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF --hf-file q2.5-coldbrew-r1-forge-q4_k_m.gguf -c 2048
```
|
Quit2003/MateQwen2.5-7b
|
Quit2003
| 2025-06-16T10:24:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:24:01Z |
---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Quit2003
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
meto/welfare
|
meto
| 2025-06-16T10:16:10Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:14:28Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** meto
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Darkhn/L3.3-70B-Animus-V1-GGUF
|
Darkhn
| 2025-06-16T10:15:52Z | 227 | 0 |
llama.cpp
|
[
"llama.cpp",
"gguf",
"base_model:Darkhn/L3.3-70B-Animus-V1",
"base_model:quantized:Darkhn/L3.3-70B-Animus-V1",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-06-13T01:52:45Z |
---
library_name: llama.cpp
license: mit
tags:
- gguf
base_model:
- Darkhn/L3.3-70B-Animus-V1
---
# L3.3-70B-Animus-V1-GGUF
GGUF model files for [`Darkhn/L3.3-70B-Animus-V1`](https://huggingface.co/Darkhn/L3.3-70B-Animus-V1).
This repository contains the following quantization: **Q5_K_M**.
## Files
- `L3.3-70B-Animus-V1-Q5_K_M.gguf`
Converted and quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp).
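A minimal local-inference sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), assuming that binding is installed; the file name comes from the list above, and the context size and sampling settings are illustrative only:
```python
from llama_cpp import Llama

# Load the Q5_K_M file listed above; tune n_ctx / n_gpu_layers for your hardware.
llm = Llama(
    model_path="L3.3-70B-Animus-V1-Q5_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)

out = llm("Write a two-sentence story about a lighthouse.", max_tokens=128)
print(out["choices"][0]["text"])
```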
|
electroglyph/Qwen3-Embedding-0.6B-onnx-int4
|
electroglyph
| 2025-06-16T10:14:40Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"onnx",
"qwen3",
"text-generation",
"transformers",
"sentence-similarity",
"feature-extraction",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:quantized:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-16T08:54:12Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-0.6B-Base
tags:
- transformers
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# Qwen3-Embedding-0.6B-onnx-int4
This is an onnx version of https://huggingface.co/Qwen/Qwen3-Embedding-0.6B
This model has been dynamically quantized to int4/uint8 and further modified to output a uint8 1024-dim tensor.
You probably don't want to use this model on CPU. I've tested on a Ryzen CPU with VNNI, and it runs at the same speed as the base f32 model but with about 2% lower retrieval accuracy. I'm posting it here in case it's useful for GPU users; not sure if it actually is, but I already made it, so here it is.
This model is compatible with qdrant fastembed; please note these details:
- Execute the model without pooling and without normalization
- Pay attention to the example query format in the code below
# Quantization method
I did an int4 quantization pass with block size == 128 (block size 32 was extremely close in accuracy), with the same nodes excluded as in my uint8 model.
Then I quantized the remaining non-excluded nodes to uint8 the same way as here: https://huggingface.co/electroglyph/Qwen3-Embedding-0.6B-onnx-uint8
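For reference, a minimal sketch of one plausible reconstruction of this two-stage pass with onnxruntime's quantization tools is shown below; the file names, exact arguments, and the tooling itself are assumptions, and the full exclusion list is in the details block that follows:
```python
# Sketch only: int4 block quantization of MatMul weights, then dynamic uint8
# quantization of the remaining non-excluded nodes. Paths are placeholders.
import onnx
from onnxruntime.quantization import QuantType, quantize_dynamic
from onnxruntime.quantization.matmul_4bits_quantizer import MatMul4BitsQuantizer

EXCLUDED_NODES = [...]  # the node names from the details block below

# Stage 1: int4 weight quantization with block size 128.
model = onnx.load("opt_f32.onnx")
int4 = MatMul4BitsQuantizer(model, block_size=128, is_symmetric=False,
                            nodes_to_exclude=EXCLUDED_NODES)
int4.process()
int4.model.save_model_to_file("int4_stage.onnx", use_external_data_format=True)

# Stage 2: dynamic uint8 quantization of everything else not excluded.
quantize_dynamic("int4_stage.onnx", "dynamic_int4.onnx",
                 weight_type=QuantType.QUInt8,
                 nodes_to_exclude=EXCLUDED_NODES)
```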
<details>
<summary>Here are the nodes I excluded</summary>
```python
["/0/auto_model/ConstantOfShape",
"/0/auto_model/Constant_28",
"/0/auto_model/layers.25/post_attention_layernorm/Pow",
"/0/auto_model/layers.26/input_layernorm/Pow",
"/0/auto_model/layers.25/input_layernorm/Pow",
"/0/auto_model/layers.24/post_attention_layernorm/Pow",
"/0/auto_model/layers.24/input_layernorm/Pow",
"/0/auto_model/layers.23/post_attention_layernorm/Pow",
"/0/auto_model/layers.23/input_layernorm/Pow",
"/0/auto_model/layers.22/post_attention_layernorm/Pow",
"/0/auto_model/layers.22/input_layernorm/Pow",
"/0/auto_model/layers.3/input_layernorm/Pow",
"/0/auto_model/layers.4/input_layernorm/Pow",
"/0/auto_model/layers.3/post_attention_layernorm/Pow",
"/0/auto_model/layers.21/post_attention_layernorm/Pow",
"/0/auto_model/layers.5/input_layernorm/Pow",
"/0/auto_model/layers.4/post_attention_layernorm/Pow",
"/0/auto_model/layers.5/post_attention_layernorm/Pow",
"/0/auto_model/layers.6/input_layernorm/Pow",
"/0/auto_model/layers.6/post_attention_layernorm/Pow",
"/0/auto_model/layers.7/input_layernorm/Pow",
"/0/auto_model/layers.8/input_layernorm/Pow",
"/0/auto_model/layers.7/post_attention_layernorm/Pow",
"/0/auto_model/layers.26/post_attention_layernorm/Pow",
"/0/auto_model/layers.9/input_layernorm/Pow",
"/0/auto_model/layers.8/post_attention_layernorm/Pow",
"/0/auto_model/layers.21/input_layernorm/Pow",
"/0/auto_model/layers.20/post_attention_layernorm/Pow",
"/0/auto_model/layers.9/post_attention_layernorm/Pow",
"/0/auto_model/layers.10/input_layernorm/Pow",
"/0/auto_model/layers.20/input_layernorm/Pow",
"/0/auto_model/layers.11/input_layernorm/Pow",
"/0/auto_model/layers.10/post_attention_layernorm/Pow",
"/0/auto_model/layers.12/input_layernorm/Pow",
"/0/auto_model/layers.11/post_attention_layernorm/Pow",
"/0/auto_model/layers.12/post_attention_layernorm/Pow",
"/0/auto_model/layers.13/input_layernorm/Pow",
"/0/auto_model/layers.19/post_attention_layernorm/Pow",
"/0/auto_model/layers.13/post_attention_layernorm/Pow",
"/0/auto_model/layers.14/input_layernorm/Pow",
"/0/auto_model/layers.19/input_layernorm/Pow",
"/0/auto_model/layers.18/post_attention_layernorm/Pow",
"/0/auto_model/layers.14/post_attention_layernorm/Pow",
"/0/auto_model/layers.15/input_layernorm/Pow",
"/0/auto_model/layers.16/input_layernorm/Pow",
"/0/auto_model/layers.15/post_attention_layernorm/Pow",
"/0/auto_model/layers.18/input_layernorm/Pow",
"/0/auto_model/layers.17/post_attention_layernorm/Pow",
"/0/auto_model/layers.17/input_layernorm/Pow",
"/0/auto_model/layers.16/post_attention_layernorm/Pow",
"/0/auto_model/layers.27/post_attention_layernorm/Pow",
"/0/auto_model/layers.27/input_layernorm/Pow",
"/0/auto_model/norm/Pow",
"/0/auto_model/layers.25/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.25/post_attention_layernorm/Add",
"/0/auto_model/layers.26/input_layernorm/Add",
"/0/auto_model/layers.26/input_layernorm/ReduceMean",
"/0/auto_model/layers.25/input_layernorm/ReduceMean",
"/0/auto_model/layers.25/input_layernorm/Add",
"/0/auto_model/layers.24/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.24/post_attention_layernorm/Add",
"/0/auto_model/layers.24/input_layernorm/Add",
"/0/auto_model/layers.24/input_layernorm/ReduceMean",
"/0/auto_model/layers.23/post_attention_layernorm/Add",
"/0/auto_model/layers.23/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.23/input_layernorm/ReduceMean",
"/0/auto_model/layers.23/input_layernorm/Add",
"/0/auto_model/layers.22/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.22/post_attention_layernorm/Add",
"/0/auto_model/layers.26/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.26/post_attention_layernorm/Add",
"/0/auto_model/layers.22/input_layernorm/ReduceMean",
"/0/auto_model/layers.22/input_layernorm/Add",
"/0/auto_model/layers.3/input_layernorm/Add",
"/0/auto_model/layers.3/input_layernorm/ReduceMean",
"/0/auto_model/layers.21/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.21/post_attention_layernorm/Add",
"/0/auto_model/layers.4/input_layernorm/Add",
"/0/auto_model/layers.4/input_layernorm/ReduceMean",
"/0/auto_model/layers.3/post_attention_layernorm/Add",
"/0/auto_model/layers.3/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.5/input_layernorm/Add",
"/0/auto_model/layers.5/input_layernorm/ReduceMean",
"/0/auto_model/layers.4/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.4/post_attention_layernorm/Add",
"/0/auto_model/layers.5/post_attention_layernorm/Add",
"/0/auto_model/layers.5/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.6/input_layernorm/Add",
"/0/auto_model/layers.6/input_layernorm/ReduceMean",
"/0/auto_model/layers.6/post_attention_layernorm/Add",
"/0/auto_model/layers.6/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.7/input_layernorm/Add",
"/0/auto_model/layers.7/input_layernorm/ReduceMean",
"/0/auto_model/layers.8/input_layernorm/ReduceMean",
"/0/auto_model/layers.8/input_layernorm/Add",
"/0/auto_model/layers.7/post_attention_layernorm/Add",
"/0/auto_model/layers.7/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.9/input_layernorm/Add",
"/0/auto_model/layers.9/input_layernorm/ReduceMean",
"/0/auto_model/layers.8/post_attention_layernorm/Add",
"/0/auto_model/layers.8/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.21/input_layernorm/Add",
"/0/auto_model/layers.21/input_layernorm/ReduceMean",
"/0/auto_model/layers.20/post_attention_layernorm/Add",
"/0/auto_model/layers.20/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.9/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.9/post_attention_layernorm/Add",
"/0/auto_model/layers.10/input_layernorm/ReduceMean",
"/0/auto_model/layers.10/input_layernorm/Add",
"/0/auto_model/layers.20/input_layernorm/Add",
"/0/auto_model/layers.20/input_layernorm/ReduceMean",
"/0/auto_model/layers.11/input_layernorm/ReduceMean",
"/0/auto_model/layers.11/input_layernorm/Add",
"/0/auto_model/layers.10/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.10/post_attention_layernorm/Add",
"/0/auto_model/layers.12/input_layernorm/ReduceMean",
"/0/auto_model/layers.12/input_layernorm/Add",
"/0/auto_model/layers.11/post_attention_layernorm/Add",
"/0/auto_model/layers.11/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.12/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.12/post_attention_layernorm/Add",
"/0/auto_model/layers.13/input_layernorm/Add",
"/0/auto_model/layers.13/input_layernorm/ReduceMean",
"/0/auto_model/layers.19/post_attention_layernorm/Add",
"/0/auto_model/layers.19/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.13/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.13/post_attention_layernorm/Add",
"/0/auto_model/layers.14/input_layernorm/Add",
"/0/auto_model/layers.14/input_layernorm/ReduceMean",
"/0/auto_model/layers.19/input_layernorm/ReduceMean",
"/0/auto_model/layers.19/input_layernorm/Add",
"/0/auto_model/layers.18/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.18/post_attention_layernorm/Add",
"/0/auto_model/layers.14/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.14/post_attention_layernorm/Add",
"/0/auto_model/layers.15/input_layernorm/ReduceMean",
"/0/auto_model/layers.15/input_layernorm/Add",
"/0/auto_model/layers.16/input_layernorm/Add",
"/0/auto_model/layers.16/input_layernorm/ReduceMean",
"/0/auto_model/layers.15/post_attention_layernorm/Add",
"/0/auto_model/layers.15/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.18/input_layernorm/Add",
"/0/auto_model/layers.18/input_layernorm/ReduceMean",
"/0/auto_model/layers.17/post_attention_layernorm/Add",
"/0/auto_model/layers.17/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.17/input_layernorm/ReduceMean",
"/0/auto_model/layers.17/input_layernorm/Add",
"/0/auto_model/layers.16/post_attention_layernorm/Add",
"/0/auto_model/layers.16/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.27/post_attention_layernorm/Add",
"/0/auto_model/layers.27/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.27/input_layernorm/Add",
"/0/auto_model/layers.27/input_layernorm/ReduceMean",
"/0/auto_model/layers.27/self_attn/q_norm/Pow",
"/0/auto_model/layers.14/self_attn/k_norm/Pow",
"/0/auto_model/layers.26/self_attn/q_norm/Pow",
"/0/auto_model/layers.25/self_attn/q_norm/Pow",
"/0/auto_model/layers.26/self_attn/k_norm/Pow",
"/0/auto_model/layers.8/self_attn/k_norm/Pow",
"/0/auto_model/layers.24/self_attn/k_norm/Pow",
"/0/auto_model/layers.24/self_attn/q_norm/Pow",
"/0/auto_model/layers.25/self_attn/k_norm/Pow",
"/0/auto_model/layers.23/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/self_attn/k_norm/Pow",
"/0/auto_model/layers.12/self_attn/k_norm/Pow",
"/0/auto_model/layers.13/self_attn/k_norm/Pow",
"/0/auto_model/layers.2/mlp/down_proj/MatMul",
"/0/auto_model/layers.3/post_attention_layernorm/Cast",
"/0/auto_model/layers.3/Add",
"/0/auto_model/layers.3/Add_1",
"/0/auto_model/layers.4/input_layernorm/Cast",
"/0/auto_model/layers.3/input_layernorm/Cast",
"/0/auto_model/layers.2/Add_1",
"/0/auto_model/layers.4/Add",
"/0/auto_model/layers.4/post_attention_layernorm/Cast",
"/0/auto_model/layers.5/input_layernorm/Cast",
"/0/auto_model/layers.4/Add_1",
"/0/auto_model/layers.5/post_attention_layernorm/Cast",
"/0/auto_model/layers.5/Add",
"/0/auto_model/layers.5/Add_1",
"/0/auto_model/layers.6/input_layernorm/Cast",
"/0/auto_model/layers.7/Add_1",
"/0/auto_model/layers.8/input_layernorm/Cast",
"/0/auto_model/layers.7/Add",
"/0/auto_model/layers.7/post_attention_layernorm/Cast",
"/0/auto_model/layers.6/Add",
"/0/auto_model/layers.6/post_attention_layernorm/Cast",
"/0/auto_model/layers.6/Add_1",
"/0/auto_model/layers.7/input_layernorm/Cast",
"/0/auto_model/layers.8/Add",
"/0/auto_model/layers.8/post_attention_layernorm/Cast",
"/0/auto_model/layers.9/input_layernorm/Cast",
"/0/auto_model/layers.8/Add_1",
"/0/auto_model/layers.9/post_attention_layernorm/Cast",
"/0/auto_model/layers.9/Add",
"/0/auto_model/layers.9/Add_1",
"/0/auto_model/layers.10/input_layernorm/Cast",
"/0/auto_model/layers.11/input_layernorm/Cast",
"/0/auto_model/layers.10/Add_1",
"/0/auto_model/layers.10/Add",
"/0/auto_model/layers.10/post_attention_layernorm/Cast",
"/0/auto_model/layers.11/Add",
"/0/auto_model/layers.11/post_attention_layernorm/Cast",
"/0/auto_model/layers.11/Add_1",
"/0/auto_model/layers.12/input_layernorm/Cast",
"/0/auto_model/layers.12/Add",
"/0/auto_model/layers.12/post_attention_layernorm/Cast",
"/0/auto_model/layers.12/Add_1",
"/0/auto_model/layers.13/input_layernorm/Cast",
"/0/auto_model/layers.13/Add",
"/0/auto_model/layers.13/post_attention_layernorm/Cast",
"/0/auto_model/layers.14/input_layernorm/Cast",
"/0/auto_model/layers.13/Add_1",
"/0/auto_model/layers.14/Add_1",
"/0/auto_model/layers.15/input_layernorm/Cast",
"/0/auto_model/layers.14/post_attention_layernorm/Cast",
"/0/auto_model/layers.14/Add",
"/0/auto_model/layers.15/post_attention_layernorm/Cast",
"/0/auto_model/layers.15/Add_1",
"/0/auto_model/layers.16/input_layernorm/Cast",
"/0/auto_model/layers.15/Add",
"/0/auto_model/layers.17/input_layernorm/Cast",
"/0/auto_model/layers.16/Add_1",
"/0/auto_model/layers.16/Add",
"/0/auto_model/layers.16/post_attention_layernorm/Cast",
"/0/auto_model/layers.19/input_layernorm/Cast",
"/0/auto_model/layers.18/Add_1",
"/0/auto_model/layers.18/input_layernorm/Cast",
"/0/auto_model/layers.17/Add_1",
"/0/auto_model/layers.17/Add",
"/0/auto_model/layers.17/post_attention_layernorm/Cast",
"/0/auto_model/layers.18/post_attention_layernorm/Cast",
"/0/auto_model/layers.18/Add",
"/0/auto_model/layers.19/Add",
"/0/auto_model/layers.19/post_attention_layernorm/Cast",
"/0/auto_model/layers.22/Add_1",
"/0/auto_model/layers.23/input_layernorm/Cast",
"/0/auto_model/layers.20/Add_1",
"/0/auto_model/layers.21/input_layernorm/Cast",
"/0/auto_model/layers.21/Add_1",
"/0/auto_model/layers.22/input_layernorm/Cast",
"/0/auto_model/layers.19/Add_1",
"/0/auto_model/layers.20/input_layernorm/Cast",
"/0/auto_model/layers.24/input_layernorm/Cast",
"/0/auto_model/layers.23/Add_1",
"/0/auto_model/layers.22/Add",
"/0/auto_model/layers.22/post_attention_layernorm/Cast",
"/0/auto_model/layers.21/Add",
"/0/auto_model/layers.21/post_attention_layernorm/Cast",
"/0/auto_model/layers.20/Add",
"/0/auto_model/layers.20/post_attention_layernorm/Cast",
"/0/auto_model/layers.23/post_attention_layernorm/Cast",
"/0/auto_model/layers.23/Add",
"/0/auto_model/layers.25/input_layernorm/Cast",
"/0/auto_model/layers.24/Add_1",
"/0/auto_model/layers.24/post_attention_layernorm/Cast",
"/0/auto_model/layers.24/Add",
"/0/auto_model/layers.25/Add",
"/0/auto_model/layers.25/post_attention_layernorm/Cast",
"/0/auto_model/layers.25/Add_1",
"/0/auto_model/layers.26/input_layernorm/Cast",
"/0/auto_model/layers.26/Add",
"/0/auto_model/layers.26/post_attention_layernorm/Cast",
"/0/auto_model/layers.21/self_attn/q_norm/Pow",
"/0/auto_model/layers.26/Add_1",
"/0/auto_model/layers.27/input_layernorm/Cast",
"/0/auto_model/layers.27/Add",
"/0/auto_model/layers.27/post_attention_layernorm/Cast",
"/0/auto_model/norm/Add",
"/0/auto_model/norm/ReduceMean",
"/0/auto_model/layers.23/self_attn/k_norm/Pow",
"/0/auto_model/layers.21/self_attn/k_norm/Pow",
"/0/auto_model/layers.22/self_attn/k_norm/Pow",
"/0/auto_model/layers.10/self_attn/k_norm/Pow",
"/0/auto_model/layers.19/self_attn/q_norm/Pow",
"/0/auto_model/layers.2/mlp/Mul",
"/0/auto_model/layers.22/self_attn/q_norm/Pow",
"/0/auto_model/layers.11/self_attn/k_norm/Pow",
"/0/auto_model/layers.20/self_attn/q_norm/Pow",
"/0/auto_model/layers.20/self_attn/k_norm/Pow",
"/0/auto_model/layers.18/self_attn/q_norm/Pow",
"/0/auto_model/layers.17/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/mlp/down_proj/MatMul",
"/0/auto_model/layers.19/self_attn/k_norm/Pow",
"/0/auto_model/layers.27/Add_1",
"/0/auto_model/norm/Cast",
"/0/auto_model/layers.16/self_attn/k_norm/Pow",
"/0/auto_model/layers.18/self_attn/k_norm/Pow",
"/0/auto_model/layers.11/self_attn/q_norm/Pow",
"/0/auto_model/layers.9/self_attn/q_norm/Pow",
"/0/auto_model/layers.26/self_attn/q_norm/Add",
"/0/auto_model/layers.26/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.14/self_attn/k_norm/Add",
"/0/auto_model/layers.14/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.16/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/mlp/Mul",
"/0/auto_model/layers.27/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.27/self_attn/q_norm/Add",
"/0/auto_model/layers.9/self_attn/k_norm/Pow",
"/0/auto_model/layers.17/self_attn/k_norm/Pow",
"/0/auto_model/layers.26/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.26/self_attn/k_norm/Add",
"/0/auto_model/layers.25/self_attn/k_norm/Add",
"/0/auto_model/layers.25/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.13/self_attn/k_norm/Add",
"/0/auto_model/layers.13/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.10/self_attn/q_norm/Pow",
"/0/auto_model/layers.25/input_layernorm/Mul_1",
"/0/auto_model/layers.27/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.27/self_attn/k_norm/Add",
"/0/auto_model/layers.26/input_layernorm/Mul_1",
"/0/auto_model/layers.15/self_attn/q_norm/Pow",
"/0/auto_model/layers.12/self_attn/k_norm/Add",
"/0/auto_model/layers.12/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.25/self_attn/q_norm/Add",
"/0/auto_model/layers.25/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.24/input_layernorm/Mul_1",
"/0/auto_model/layers.12/self_attn/q_norm/Pow",
"/0/auto_model/layers.24/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.24/self_attn/q_norm/Add",
"/0/auto_model/layers.24/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.24/self_attn/k_norm/Add",
"/0/auto_model/layers.22/mlp/Mul",
"/0/auto_model/layers.2/post_attention_layernorm/Pow",
"/0/auto_model/layers.23/mlp/Mul",
"/0/auto_model/layers.24/mlp/Mul",
"/0/auto_model/layers.23/input_layernorm/Mul_1",
"/0/auto_model/layers.14/self_attn/q_norm/Pow",
"/0/auto_model/layers.14/self_attn/k_proj/MatMul",
"/0/auto_model/layers.14/self_attn/k_norm/Cast",
"/0/auto_model/layers.14/self_attn/Reshape_1",
"/0/auto_model/layers.21/mlp/Mul",
"/0/auto_model/layers.3/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.3/input_layernorm/Sqrt",
"/0/auto_model/layers.4/input_layernorm/Sqrt",
"/0/auto_model/layers.5/input_layernorm/Sqrt",
"/0/auto_model/layers.4/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.5/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.6/input_layernorm/Sqrt",
"/0/auto_model/layers.6/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.8/input_layernorm/Sqrt",
"/0/auto_model/layers.8/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.7/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.7/input_layernorm/Sqrt",
"/0/auto_model/layers.9/input_layernorm/Sqrt",
"/0/auto_model/layers.10/input_layernorm/Sqrt",
"/0/auto_model/layers.9/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.11/input_layernorm/Sqrt",
"/0/auto_model/layers.10/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.12/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.11/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.12/input_layernorm/Sqrt",
"/0/auto_model/layers.13/input_layernorm/Sqrt",
"/0/auto_model/layers.14/input_layernorm/Sqrt",
"/0/auto_model/layers.13/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.15/input_layernorm/Sqrt",
"/0/auto_model/layers.14/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.16/input_layernorm/Sqrt",
"/0/auto_model/layers.15/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.17/input_layernorm/Sqrt",
"/0/auto_model/layers.16/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.19/input_layernorm/Sqrt",
"/0/auto_model/layers.17/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.18/input_layernorm/Sqrt",
"/0/auto_model/layers.18/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.19/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.23/input_layernorm/Sqrt",
"/0/auto_model/layers.20/input_layernorm/Sqrt",
"/0/auto_model/layers.21/input_layernorm/Sqrt",
"/0/auto_model/layers.22/input_layernorm/Sqrt",
"/0/auto_model/layers.22/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.24/input_layernorm/Sqrt",
"/0/auto_model/layers.20/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.21/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.23/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.25/input_layernorm/Sqrt",
"/0/auto_model/layers.24/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.25/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.26/input_layernorm/Sqrt",
"/0/auto_model/layers.26/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.15/self_attn/k_norm/Pow",
"/0/auto_model/layers.27/input_layernorm/Sqrt",
"/0/auto_model/layers.27/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.2/input_layernorm/Pow",
"/0/auto_model/layers.26/mlp/Mul",
"/0/auto_model/layers.23/self_attn/q_norm/Add",
"/0/auto_model/layers.23/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.13/self_attn/q_norm/Pow",
"/0/auto_model/layers.21/self_attn/q_norm/Add",
"/0/auto_model/layers.21/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.6/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/self_attn/Reshape_7",
"/0/auto_model/layers.27/self_attn/MatMul_1",
"/0/auto_model/layers.27/self_attn/Transpose_4",
"/0/auto_model/layers.26/self_attn/Expand_1",
"/0/auto_model/layers.26/self_attn/Unsqueeze_19",
"/0/auto_model/layers.26/self_attn/v_proj/MatMul",
"/0/auto_model/layers.26/self_attn/Transpose_2",
"/0/auto_model/layers.26/self_attn/Reshape_6",
"/0/auto_model/layers.26/self_attn/Reshape_2",
"/0/auto_model/layers.11/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.11/self_attn/k_norm/Add",
"/0/auto_model/layers.22/input_layernorm/Mul_1",
"/0/auto_model/layers.25/mlp/Mul",
"/0/auto_model/layers.8/self_attn/k_norm/Cast",
"/0/auto_model/layers.8/self_attn/k_proj/MatMul",
"/0/auto_model/layers.8/self_attn/Reshape_1",
"/0/auto_model/layers.21/input_layernorm/Mul_1",
"/0/auto_model/layers.5/self_attn/q_norm/Pow",
"/0/auto_model/layers.22/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.22/self_attn/q_norm/Add",
"/0/auto_model/layers.22/mlp/down_proj/MatMul",
"/0/auto_model/layers.23/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.23/self_attn/k_norm/Add",
"/0/auto_model/layers.23/mlp/down_proj/MatMul",
"/0/auto_model/layers.26/mlp/down_proj/MatMul",
"/0/auto_model/layers.1/self_attn/Add_2",
"/0/auto_model/layers.2/self_attn/Add_2",
"/0/auto_model/layers.6/self_attn/Add_2",
"/0/auto_model/layers.11/self_attn/Add_2",
"/0/auto_model/layers.12/self_attn/Add_2",
"/0/auto_model/layers.16/self_attn/Add_2",
"/0/auto_model/layers.21/self_attn/Add_2",
"/0/auto_model/layers.24/self_attn/Add_2",
"/0/auto_model/layers.0/self_attn/Add_2",
"/0/auto_model/layers.8/self_attn/Add_2",
"/0/auto_model/layers.13/self_attn/Add_2",
"/0/auto_model/layers.26/self_attn/Add_2",
"/0/auto_model/layers.3/self_attn/Add_2",
"/0/auto_model/layers.15/self_attn/Add_2",
"/0/auto_model/layers.25/self_attn/Add_2",
"/0/auto_model/layers.4/self_attn/Add_2",
"/0/auto_model/layers.14/self_attn/Add_2",
"/0/auto_model/layers.22/self_attn/Add_2",
"/0/auto_model/layers.9/self_attn/Add_2",
"/0/auto_model/layers.23/self_attn/Add_2",
"/0/auto_model/layers.10/self_attn/Add_2",
"/0/auto_model/layers.5/self_attn/Add_2",
"/0/auto_model/layers.19/self_attn/Add_2",
"/0/auto_model/layers.7/self_attn/Add_2",
"/0/auto_model/layers.27/self_attn/Add_2",
"/0/auto_model/layers.18/self_attn/Add_2",
"/0/auto_model/layers.20/self_attn/Add_2",
"/0/auto_model/layers.17/self_attn/Add_2",
"/0/auto_model/Slice_1",
"/0/auto_model/layers.5/self_attn/Slice_4",
"/0/auto_model/layers.12/self_attn/Slice_4",
"/0/auto_model/layers.18/self_attn/Slice_4",
"/0/auto_model/layers.3/self_attn/Slice_4",
"/0/auto_model/layers.11/self_attn/Slice_4",
"/0/auto_model/layers.22/self_attn/Slice_4",
"/0/auto_model/Expand",
"/0/auto_model/layers.4/self_attn/Slice_4",
"/0/auto_model/Slice_2",
"/0/auto_model/layers.8/self_attn/Slice_4",
"/0/auto_model/layers.2/self_attn/Slice_4",
"/0/auto_model/layers.15/self_attn/Slice_4",
"/0/auto_model/layers.26/self_attn/Slice_4",
"/0/auto_model/layers.24/self_attn/Slice_4",
"/0/auto_model/Expand_1",
"/0/auto_model/layers.14/self_attn/Slice_4",
"/0/auto_model/layers.21/self_attn/Slice_4",
"/0/auto_model/layers.1/self_attn/Slice_4",
"/0/auto_model/Reshape_2",
"/0/auto_model/layers.19/self_attn/Slice_4",
"/0/auto_model/Slice",
"/0/auto_model/layers.6/self_attn/Slice_4",
"/0/auto_model/layers.0/self_attn/Slice_4",
"/0/auto_model/layers.25/self_attn/Slice_4",
"/0/auto_model/Unsqueeze_4",
"/0/auto_model/layers.10/self_attn/Slice_4",
"/0/auto_model/layers.23/self_attn/Slice_4",
"/0/auto_model/layers.17/self_attn/Slice_4",
"/0/auto_model/Where_1",
"/0/auto_model/layers.27/self_attn/Slice_4",
"/0/auto_model/layers.20/self_attn/Slice_4",
"/0/auto_model/Add",
"/0/auto_model/Mul",
"/0/auto_model/layers.7/self_attn/Slice_4",
"/0/auto_model/layers.13/self_attn/Slice_4",
"/0/auto_model/layers.9/self_attn/Slice_4",
"/0/auto_model/layers.16/self_attn/Slice_4",
"/0/auto_model/Unsqueeze_3",
"/0/auto_model/ScatterND"]
```
</details>
# Benchmarks
## Speed
Method: one big chunk of text, 10 runs each, seconds elapsed.
- `dynamic_int4.onnx` (this model): 45.37
- `opt_f32.onnx` (base f32 model preprocessed for quantization): 46.07
- `dynamic_uint8.onnx` (probably the one you want to use on CPU): 34.61

Verdict: this model kinda sucks on CPU. Let me know how it is on GPU, please.
## Accuracy
I used beir-qdrant with the scifact dataset.
A single retrieval benchmark isn't the most conclusive result, so I welcome additional benchmarks from the community; please feel free to share further results.
If someone wants to sponsor me with an NVIDIA GPU, I can have a much faster turnaround time on my model experiments and explore some different quantization strategies.
onnx f32 model with f32 output (baseline):
```
ndcg: {'NDCG@1': 0.57, 'NDCG@3': 0.65655, 'NDCG@5': 0.68177, 'NDCG@10': 0.69999, 'NDCG@100': 0.72749, 'NDCG@1000': 0.73301}
recall: {'Recall@1': 0.53828, 'Recall@3': 0.71517, 'Recall@5': 0.77883, 'Recall@10': 0.83056, 'Recall@100': 0.95333, 'Recall@1000': 0.99667}
precision: {'P@1': 0.57, 'P@3': 0.26111, 'P@5': 0.17467, 'P@10': 0.09467, 'P@100': 0.01083, 'P@1000': 0.00113}
```
onnx dynamic int4/uint8 model with f32 output (this model's parent):
```
ndcg: {'NDCG@1': 0.55333, 'NDCG@3': 0.6491, 'NDCG@5': 0.6674, 'NDCG@10': 0.69277, 'NDCG@100': 0.7183, 'NDCG@1000': 0.72434}
recall: {'Recall@1': 0.52161, 'Recall@3': 0.71739, 'Recall@5': 0.7645, 'Recall@10': 0.83656, 'Recall@100': 0.95, 'Recall@1000': 0.99667}
precision: {'P@1': 0.55333, 'P@3': 0.26222, 'P@5': 0.17067, 'P@10': 0.095, 'P@100': 0.0108, 'P@1000': 0.00113}
```
onnx dynamic int4/uint8 model with uint8 output (this model):
```
ndcg: {'NDCG@1': 0.55333, 'NDCG@3': 0.64613, 'NDCG@5': 0.67406, 'NDCG@10': 0.68834, 'NDCG@100': 0.71482, 'NDCG@1000': 0.72134}
recall: {'Recall@1': 0.52161, 'Recall@3': 0.70961, 'Recall@5': 0.77828, 'Recall@10': 0.81822, 'Recall@100': 0.94333, 'Recall@1000': 0.99333}
precision: {'P@1': 0.55333, 'P@3': 0.25889, 'P@5': 0.17533, 'P@10': 0.09333, 'P@100': 0.01073, 'P@1000': 0.00112}
```
# Example inference/benchmark code and how to use the model with Fastembed
After installing beir-qdrant, make sure to upgrade fastembed.
```python
# pip install qdrant_client beir-qdrant
# pip install -U fastembed
from fastembed import TextEmbedding
from fastembed.common.model_description import PoolingType, ModelSource
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval.evaluation import EvaluateRetrieval
from qdrant_client import QdrantClient
from qdrant_client.models import Datatype
from beir_qdrant.retrieval.models.fastembed import DenseFastEmbedModelAdapter
from beir_qdrant.retrieval.search.dense import DenseQdrantSearch
TextEmbedding.add_custom_model(
    model="electroglyph/Qwen3-Embedding-0.6B-onnx-int4",
    pooling=PoolingType.DISABLED,
    normalization=False,
    sources=ModelSource(hf="electroglyph/Qwen3-Embedding-0.6B-onnx-int4"),
    dim=1024,
    model_file="dynamic_int4.onnx",
)
dataset = "scifact"
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{}.zip".format(dataset)
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
# IMPORTANT: USE THIS (OR A SIMILAR) QUERY FORMAT WITH THIS MODEL:
for k in queries.keys():
    queries[k] = (
        f"Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: {queries[k]}"
    )
qdrant_client = QdrantClient("http://localhost:6333")
model = DenseQdrantSearch(
    qdrant_client,
    model=DenseFastEmbedModelAdapter(model_name="electroglyph/Qwen3-Embedding-0.6B-onnx-int4"),
    collection_name="scifact-qwen3-int4",
    initialize=True,
    datatype=Datatype.UINT8,
)
retriever = EvaluateRetrieval(model)
results = retriever.retrieve(corpus, queries)
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
print(f"ndcg: {ndcg}\nrecall: {recall}\nprecision: {precision}")
```
|
aieng-lab/ModernBERT-large_smell-doc
|
aieng-lab
| 2025-06-16T10:04:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"en",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-16T10:04:40Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- answerdotai/ModernBERT-large
pipeline_tag: text-classification
---
# ModernBERT large for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
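## Usage
A minimal sketch with the 🤗 `transformers` pipeline; the multi-label setup and label names are taken from this card, and the decision threshold is left to the user:
```python
from transformers import pipeline

# Multi-label classifier for documentation smells; top_k=None returns a score per label.
classifier = pipeline(
    "text-classification",
    model="aieng-lab/ModernBERT-large_smell-doc",
    top_k=None,
)

scores = classifier("The setup steps are repeated across three unrelated sections.")
print(scores)
```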
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
LandCruiser/sn21_omegaany_1606_2
|
LandCruiser
| 2025-06-16T09:53:00Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-16T09:27:02Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
nagoshidayo/MORTM
|
nagoshidayo
| 2025-06-16T09:12:57Z | 0 | 0 | null |
[
"music-generation",
"transformer",
"MoE",
"ALiBi",
"FlashAttention",
"melody-generation",
"rhythmic-modeling",
"MIDI or Chord to MIDI or Chord",
"region:us"
] | null | 2025-06-16T09:02:25Z |
---
pipeline_tag: MIDI or Chord to MIDI or Chord
tags:
- music-generation
- transformer
- MoE
- ALiBi
- FlashAttention
- melody-generation
- rhythmic-modeling
---
# Model Card for MORTM (Metric-Oriented Rhythmic Transformer for Melodic generation)
MORTM is a Transformer-based model designed for melody generation, with a strong emphasis on metric (rhythmic) structure. It represents music as sequences of pitch, duration, and relative beat positions within a measure (normalized to 96 ticks), making it suitable for time-robust, rhythm-aware music generation tasks.
## Model Details
### Model Description
MORTM (Metric-Oriented Rhythmic Transformer for Melodic generation) is a decoder-only Transformer architecture optimized for music generation with rhythmic awareness. It generates melodies measure-by-measure in an autoregressive fashion. The model supports chord-conditional generation and is equipped with the following features:
- Mixture of Experts (MoE) in the feedforward layers for capacity increase and compute efficiency.
- ALiBi (Attention with Linear Biases) for relative positional biasing.
- FlashAttention2 for fast and memory-efficient attention.
- Relative tick-based tokenization (e.g., [Position, Duration, Pitch]) for metric robustness.
- **Developed by:** Koue Okazaki & Takaki Nagoshi
- **Funded by [optional]:** Nihon University, Graduate School of Integrated Basic Sciences
- **Shared by [optional]:** ProjectMORTM
- **Model type:** Transformer (decoder-only with MoE and ALiBi)
- **Language(s) (NLP):** N/A (music domain)
- **License:** MIT
- **Finetuned from model [optional]:** Custom-built from scratch (not fine-tuned from a pretrained LM)
### Model Sources [optional]
- **Repository:** [https://github.com/Ayato964/MORTM](https://github.com/Ayato964/MORTM) *(replace with actual link)*
- **Paper [optional]:** In submission
- **Demo [optional]:** Coming soon
## Uses
### Direct Use
MORTM can generate melodies from scratch or conditionally based on chord progressions. It is ideal for:
- Melody composition in pop, jazz, and improvisational styles.
- Real-time melodic suggestion systems for human-AI co-creation.
- Music education and melody completion tools.
### Downstream Use [optional]
- Style transfer with different chord inputs.
- Harmonization and rhythm-based accompaniment systems.
### Out-of-Scope Use
- Audio-to-audio tasks (e.g., vocal separation).
- Raw audio synthesis (requires additional vocoder).
- Not suitable for genre classification or music recommendation.
## Bias, Risks, and Limitations
As the training dataset is primarily composed of Western tonal music, the model may underperform on:
- Non-tonal, microtonal, or traditional music styles.
- Polyrhythmic or tempo-variable music.
- Genres not sufficiently represented in training data (e.g., Indian classical).
### Recommendations
Generated melodies should be manually reviewed in professional music contexts. Users are encouraged to retrain or fine-tune on representative datasets when applying to culturally specific music.
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("nagoshidayo/mortm")
tokenizer = AutoTokenizer.from_pretrained("nagoshidayo/mortm")
```
|
En3rGy/GetphatFLUXReality
|
En3rGy
| 2025-06-16T09:10:06Z | 17 | 0 | null |
[
"gguf",
"region:us"
] | null | 2025-05-03T03:18:02Z |
---
title: GetphatFLUXRealityNSFW
emoji: 🖼
colorFrom: purple
colorTo: red
sdk: gradio
sdk_version: 5.25.2
app_file: app.py
pinned: false
hf_oauth: true
hf_oauth_scopes:
- inference-api
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
najmharani/gemma-1b-biography_ver2
|
najmharani
| 2025-06-16T09:02:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T09:02:22Z |
---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** najmharani
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.05_0.25_epoch1
|
MinaMila
| 2025-06-16T08:51:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T08:50:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Aleteian/TerraIncognita-24B
|
Aleteian
| 2025-06-16T08:43:14Z | 0 | 2 | null |
[
"safetensors",
"mistral",
"merge",
"mergekit",
"lazymergekit",
"LatitudeGames/Harbinger-24B",
"ReadyArt/Broken-Tutu-24B-Unslop-v2.0",
"base_model:LatitudeGames/Harbinger-24B",
"base_model:merge:LatitudeGames/Harbinger-24B",
"base_model:ReadyArt/Broken-Tutu-24B-Unslop-v2.0",
"base_model:merge:ReadyArt/Broken-Tutu-24B-Unslop-v2.0",
"region:us"
] | null | 2025-06-16T08:34:25Z |
---
base_model:
- LatitudeGames/Harbinger-24B
- ReadyArt/Broken-Tutu-24B-Unslop-v2.0
tags:
- merge
- mergekit
- lazymergekit
- LatitudeGames/Harbinger-24B
- ReadyArt/Broken-Tutu-24B-Unslop-v2.0
---
# TerraIncognita-24B
TerraIncognita-24B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [LatitudeGames/Harbinger-24B](https://huggingface.co/LatitudeGames/Harbinger-24B)
* [ReadyArt/Broken-Tutu-24B-Unslop-v2.0](https://huggingface.co/ReadyArt/Broken-Tutu-24B-Unslop-v2.0)
## 🧩 Configuration
```yaml
models:
- model: LatitudeGames/Harbinger-24B
parameters:
weight: 1.0
- model: ReadyArt/Broken-Tutu-24B-Unslop-v2.0
parameters:
weight: 1.0
merge_method: della_linear
base_model: LatitudeGames/Harbinger-24B
parameters:
normalize: true
int8_mask: true
dtype: float16
tokenizer:
source: union
chat_template: "chatml"
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Aleteian/TerraIncognita-24B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
lzq677/GeoCode-GPT-V2
|
lzq677
| 2025-06-16T08:14:45Z | 0 | 0 | null |
[
"safetensors",
"llama",
"code",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-06-16T03:17:57Z |
---
license: cc-by-nc-4.0
language:
- en
tags:
- code
---
**GeoCode-GPT: A large language model for geospatial code generation**

Created by Wuhan University Luojia ST Computing Lab.

Cite:
```bibtex
@article{HOU2025104456,
  title = {GeoCode-GPT: A large language model for geospatial code generation},
  journal = {International Journal of Applied Earth Observation and Geoinformation},
  volume = {138},
  pages = {104456},
  year = {2025},
  issn = {1569-8432},
  doi = {https://doi.org/10.1016/j.jag.2025.104456},
  url = {https://www.sciencedirect.com/science/article/pii/S1569843225001037},
  author = {Shuyang Hou and Zhangxiao Shen and Anqi Zhao and Jianyuan Liang and Zhipeng Gui and Xuefeng Guan and Rui Li and Huayi Wu}
}
```
|
openbmb/Ultra-FineWeb-classifier
|
openbmb
| 2025-06-16T08:05:21Z | 0 | 6 | null |
[
"arxiv:2505.05427",
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T03:17:35Z |
---
license: apache-2.0
---
# Ultra-FineWeb-Classifier
<div align="center">
<img src="assets/ultra-fineweb-logo.png" width="600"/>
</div>
<!-- <div align="center">
English | [简体中文]()
</div> -->
<div align="center">
[📜 Technical Report](https://arxiv.org/abs/2505.05427)
</div>
## 📚 Introduction
Ultra-FineWeb is a **large-scale, high-quality, and efficiently-filtered dataset**. We apply the proposed efficient verification-based high-quality filtering pipeline to the FineWeb and Chinese FineWeb datasets (source data from Chinese FineWeb-edu-v2, which includes IndustryCorpus2, MiChao, WuDao, SkyPile, WanJuan, ChineseWebText, TeleChat, and CCI3), resulting in the higher-quality Ultra-FineWeb-en dataset with approximately 1T tokens and the Ultra-FineWeb-zh dataset with approximately 120B tokens, collectively referred to as Ultra-FineWeb. ***Ultra-FineWeb*** serves as a core pre-training web dataset for the [MiniCPM4 Series](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) models.
- [Ultra-FineWeb](https://huggingface.co/datasets/openbmb/Ultra-FineWeb): Ultra-FineWeb, a **large-scale, high-quality, and efficiently-filtered dataset**, with 1T English tokens and 120B Chinese tokens.
- [Ultra-FineWeb-classifier](https://huggingface.co/openbmb/Ultra-FineWeb-classifier): Ultra-FineWeb classifier, for filtering high-quality data from web corpora. (**<-- you are here**)
## 📢 What's New
- **[2025.05.09]** **Ultra-FineWeb** technical report is available on [arXiv](https://arxiv.org/abs/2505.05427). 🔥🔥🔥
- **[2025.05.15]** **Ultra-FineWeb** tops the Hugging Face Datasets Trending list, reaching the #1 spot! ⭐️⭐️⭐️
- **[2025.06.06]** **Ultra-FineWeb-en** and **Ultra-FineWeb-zh** datasets are now available on Hugging Face, released alongside the [MiniCPM4 Series](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) models.
- **[2025.06.16]** The **Ultra-FineWeb-classifier** is now available on Hugging Face: [openbmb/Ultra-FineWeb-classifier](https://huggingface.co/openbmb/Ultra-FineWeb-classifier). 🚀🚀🚀
## 💡 Highlights
> **Abstract:** Data quality has become a key factor in enhancing model performance with the rapid development of large language models (LLMs). Model-driven data filtering has increasingly become a primary approach for acquiring high-quality data. However, it still faces two main challenges: (1) the lack of an efficient data verification strategy makes it difficult to provide timely feedback on data quality; and (2) the selection of seed data for training classifiers lacks clear criteria and relies heavily on human expertise, introducing a degree of subjectivity. To address the first challenge, we introduce an efficient verification strategy that enables rapid evaluation of the impact of data on LLM training with minimal computational cost. To tackle the second challenge, we build upon the assumption that high-quality seed data is beneficial for LLM training, and by integrating the proposed verification strategy, we optimize the selection of positive and negative samples and propose an efficient data filtering pipeline. This pipeline not only improves filtering efficiency, classifier quality, and robustness, but also significantly reduces experimental and inference costs. In addition, to efficiently filter high-quality data, we employ a lightweight classifier based on *fastText*, and successfully apply the filtering pipeline to two widely-used pre-training corpora, *FineWeb* and *Chinese FineWeb* datasets, resulting in the creation of the higher-quality ***Ultra-FineWeb*** dataset. ***Ultra-FineWeb*** contains approximately 1 trillion (T) English tokens and 120 billion (B) Chinese tokens. Empirical results demonstrate that the LLMs trained on Ultra-FineWeb exhibit significant performance improvements across multiple benchmark tasks, validating the effectiveness of our pipeline in enhancing both data quality and training efficiency.
<div align="center">
<img src="assets/ultra-fineweb-pipeline.png" width="600"/>
</div>
- **Efficient Verification Strategy:** We propose a computationally efficient verification strategy that enables rapid evaluation of the impact of data on LLM training performance with minimal computational cost, significantly improving the efficiency of high-quality data filtering experiments.
- **Large-Scale High-Quality Pre-training Datasets:** We design and implement an efficient high-quality data filtering pipeline, applied to the FineWeb and Chinese FineWeb datasets, resulting in the creation of higher-quality datasets, which can facilitate high-quality LLM training.
- **Lightweight Classifier:** The Ultra-FineWeb classifier significantly reduces inference costs, achieving superior performance on extracted text from the same data source, thus validating the effectiveness of our proposed data filtering pipeline in enhancing data quality and training efficiency.
## 🚀 Usage of Ultra-FineWeb Classifier
### Inference single content
1. Put the content you want to infer into the [`scripts/local_scripts/single_content.txt`](scripts/local_scripts/single_content.txt) file.
2. Run the [`scripts/local_scripts/infer_single_content.py`](scripts/local_scripts/infer_single_content.py) script to infer the content:
```bash
# set the language you want to infer, support: en, zh
LANGUAGE=en
# set the tokenizer path, default: local_tokenizer
# user can also directly use "deepseek-ai/DeepSeek-V2"
TOKENIZER_PATH=local_tokenizer
# set the content file path, default: scripts/local_scripts/single_content.txt
CONTENT_FILE=scripts/local_scripts/single_content.txt
python scripts/local_scripts/infer_single_content.py --language ${LANGUAGE} --tokenizer-path ${TOKENIZER_PATH} --content-file ${CONTENT_FILE}
```
The result is then printed in the terminal, for example:
```text
Content: {User's input content}
Normalized content: {Normalized content}
- Pred label: {Pred label}
- Pred score: {Pred score}
```
### Inference folder
Assume the input folder is `data/input`, the content key is `content`, and the output folder is `data/output`. Users can run the [`scripts/local_scripts/infer_folder.py`](scripts/local_scripts/infer_folder.py) script to run inference over the folder:
```bash
# set the language you want to infer, support: en, zh
LANGUAGE=en
# set the data path
DATA_PATH=data/input
# set the save path
SAVE_PATH=data/output
# set the content key
CONTENT_KEY=content
# below are optional arguments
# set the tokenizer path, default: local_tokenizer
TOKENIZER_PATH=local_tokenizer
# set the processes number, default: 64
PROCESSES_NUM=64
# set the write batch size, default: 100
WRITE_BATCH_SIZE=100
python scripts/local_scripts/infer_folder.py \
--language ${LANGUAGE} \
--data-path ${DATA_PATH} \
--save-path ${SAVE_PATH} \
--content-key ${CONTENT_KEY} \
--tokenizer-path ${TOKENIZER_PATH} \
--processes-num ${PROCESSES_NUM} \
--write-batch-size ${WRITE_BATCH_SIZE} \
[--inplace] # optional, delete the processed data and re-process the data
```
For Spark inference, we also provide [`scripts/spark_scripts/spark_infer.py`](scripts/spark_scripts/spark_infer.py), a demo script for users to run on a Spark cluster.
**NOTE:**
- The `numpy` version should be lower than 2.0 for the `fasttext` package.
- The `config.json` file is a placeholder; its parameters are used for the `fasttext` training.
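For quick experiments outside the provided scripts, a minimal sketch of direct fastText inference is shown below; the model file name is an assumption (check the repository layout), and the provided scripts apply their own text normalization before scoring:
```python
import fasttext

# Sketch only: load the fastText classifier from this repo and score one text.
# The file name below is an assumption; see the repository for the real path.
model = fasttext.load_model("ultra_fineweb_en.bin")

text = "Some web text to score."
labels, scores = model.predict(text.replace("\n", " "))  # fastText predict rejects newlines
print(labels, scores)
```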
## ❤️ Acknowledgements
- The ***Ultra-FineWeb classifier*** is built based on [fastText](https://fasttext.cc/).
- The ***Ultra-FineWeb-en dataset*** is built based on [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb).
- The ***Ultra-FineWeb-zh dataset*** is constructed based on [IndustryCorpus2](https://huggingface.co/datasets/BAAI/IndustryCorpus2), [MiChao](https://opendatalab.com/OpenDataLab/MiChao), [WuDao](https://data.baai.ac.cn/details/WuDaoCorporaText), [SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B), [WanJuan](https://opendatalab.com/OpenDataLab/WanJuanCC), [ChineseWebText](https://huggingface.co/datasets/CASIA-LM/ChineseWebText2.0), [TeleChat](https://huggingface.co/datasets/Tele-AI/TeleChat-PTD), and [CCI3](https://huggingface.co/datasets/BAAI/CCI3-Data).
Thanks for their awesome work! Open-source contributions make Ultra-FineWeb possible! 🙌
## 🌟 Citation
If you find our work useful, please consider citing:
```bibtex
@misc{wang2025ultrafineweb,
title={{Ultra-FineWeb}: Efficient Data Filtering and Verification for High-Quality LLM Training Data},
author={Yudong Wang and Zixuan Fu and Jie Cai and Peijun Tang and Hongya Lyu and Yewei Fang and Zhi Zheng and Jie Zhou and Guoyang Zeng and Chaojun Xiao and Xu Han and Zhiyuan Liu},
year={2025},
eprint={2505.05427},
archivePrefix={arXiv},
primaryClass={cs.CL},
}
```
## 💳 License
This project is released under the [Apache 2.0](./LICENSE). Please note that since ***Ultra-FineWeb*** is built using multiple datasets, users should check the **LICENSE of each dataset individually** to ensure proper usage and compliance.
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.15_epoch1
|
MinaMila
| 2025-06-16T07:48:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T07:46:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aieng-lab/bert-large-cased_review-aspect
|
aieng-lab
| 2025-06-16T07:34:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-16T07:34:44Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- bert-large-cased
pipeline_tag: text-classification
---
# BERT large for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [bert-large-cased](https://huggingface.co/bert-large-cased)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
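In the absence of an official snippet, here is a minimal usage sketch (the label strings follow the list above; exact casing comes from the model's config):

```python
from transformers import pipeline

# Load the fine-tuned classifier and score one forum sentence.
classifier = pipeline("text-classification", model="aieng-lab/bert-large-cased_review-aspect")
print(classifier("The documentation for this API is outdated and hard to follow."))
# e.g. [{'label': 'documentation', 'score': 0.98}]  (illustrative output)
```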
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
xfjcoder/llama3.1-8b-erged-6bit
|
xfjcoder
| 2025-06-16T07:31:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T07:31:17Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** xfjcoder
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
FlameF0X/SnowflakeCore-G0-Release-3-1B
|
FlameF0X
| 2025-06-16T07:26:20Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-15T19:54:04Z |
---
license: apache-2.0
---
# 🔬 SnowflakeCore-G0-Release-3 Architecture Size Report
## Summary
This document provides a detailed breakdown of the parameter count and structural design of the **SnowflakeCore-G0-Release-3** model. SnowflakeCore-G0-Release-3 is a custom decoder-only transformer model built from scratch, designed for autoregressive language modeling with rotary positional embeddings (RoPE).
---
## 📐 Model Architecture Overview
| Component | Value |
|------------------------|-----------------|
| Architecture Type | Decoder-only Transformer |
| Hidden Size (d_model) | 1536 |
| Number of Layers | 32 |
| Attention Heads | 16 |
| Feedforward Dim (d_ff) | 6144 |
| Max Sequence Length | 2048 |
| Positional Encoding | Rotary (RoPE) |
| Vocabulary Size | 50,000 (assumed)|
| Total Parameters | **≈ 1.06 Billion** |
---
## 🧮 Parameter Count Breakdown
### 1. Embedding Layers
- **Token Embedding**: V × d = 50,000 × 1536 = 76.8M
- **Output Projection**: d × V = 1536 × 50,000 = 76.8M
**Total**:
```
P_embedding = 2 · 1536 · 50,000 = 153.6M
```
---
### 2. Transformer Blocks
Each of the 32 layers contains:
- **Multi-Head Attention (Q, K, V, Out)**:
4 · d² = 4 · 1536² = 9.44M
- **Feedforward Network (MLP)**:
2 · d · d_ff = 2 · 1536 · 6144 = 18.87M
- **Total per Layer**:
```
9.44M + 18.87M = 28.31M
```
- **Total across 32 layers**:
```
32 · 28.31M = 905.97M
```
---
### 3. Positional Embedding
- **Type**: Rotary Positional Embeddings (RoPE)
- **Parameter Count**: **0** (non-learned, sinusoidal basis)
---
## 📊 Final Parameter Estimate
```
Total Parameters ≈ P_embedding + P_transformer = 153.6M + 905.97M = 1,059.6M
```
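The arithmetic above can be reproduced with a short sketch that follows the same approximation as this report (biases and LayerNorm parameters are ignored):

```python
# Reproduce the parameter estimate (same simplifications as the report).
V, d, d_ff, n_layers = 50_000, 1536, 6144, 32

embedding = 2 * V * d                     # token embedding + output projection
per_layer = 4 * d * d + 2 * d * d_ff      # attention (Q, K, V, Out) + feedforward
total = embedding + n_layers * per_layer

print(f"{embedding / 1e6:.1f}M + {n_layers * per_layer / 1e6:.2f}M = {total / 1e9:.3f}B")
# 153.6M + 905.97M = 1.060B
```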
---
## 🧠 Training Regime (Contextual)
| Item | Value |
|-----------------------------|-------------------|
| Training Dataset Size | ~2 million rows |
| Max Tokens per Sequence | 2048 |
| Effective Batch Size | 32 × 4 = 128 |
| Number of Epochs | 15 |
| Optimizer | AdamW |
| Learning Rate | 3 × 10⁻⁴ |
Approximate number of tokens:
```
2M × avg_tokens_per_row ≤ 2M × 2048 ≈ 4.1B tokens
```
---
## 🧾 Notes
- SnowflakeCore-G0-Release-3 exceeds the size of GPT-2 Large (~774M parameters).
- With RoPE and 32 layers, the model is well-positioned for long-range generalization.
- This parameter size is consistent with the compute-optimal design frontier for mid-scale language models.
---
## 📦 Conclusion
**SnowflakeCore-G0-Release-3** is a rigorously engineered, 1.06B parameter language model with modern architectural choices (RoPE, deep stack, wide FFN) that position it as a strong open foundation model for further research, deployment, and extension.
|
imrahulwarkade/mistral-toneop-finetuned
|
imrahulwarkade
| 2025-06-16T07:05:44Z | 0 | 1 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2025-06-16T07:04:28Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.2
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
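No official snippet is provided yet; under the assumption that this repo hosts a LoRA/PEFT adapter for `mistralai/Mistral-7B-Instruct-v0.2` (per the metadata above), a minimal loading sketch could look like:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter from this repo (assumed layout).
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", device_map="auto")
model = PeftModel.from_pretrained(base, "imrahulwarkade/mistral-toneop-finetuned")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```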
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
namdp-ptit/LLamaRE-8B-Instruct-ZeroShot
|
namdp-ptit
| 2025-06-16T07:05:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"transformer",
"classification",
"token-classification",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-16T06:09:47Z |
---
license: apache-2.0
language:
- en
base_model:
- unsloth/Meta-Llama-3.1-8B-Instruct
pipeline_tag: token-classification
library_name: transformers
tags:
- transformer
- classification
---
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('namdp-ptit/LLamaRE-8B-Instruct-ZeroShot')
model = AutoModelForCausalLM.from_pretrained(
'namdp-ptit/LLamaRE-8B-Instruct-ZeroShot',
torch_dtype="auto",
device_map="cuda",
)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = model.config.eos_token_id
user_prompt = """
Extract relationships between entities in text **strictly using ONLY the provided Relationship List** below and **MUST** strictly adhere to the output format.
Format each relationship as '<relation_type>: <head_entity>, <tail_entity>' and separate multiple relationships with '|'. Return 'None' if no relationships are identified.
Relationship List: {re_labels}
Text: {text}
"""
query = 'An art exhibit at the Hakawati Theatre in Arab east Jerusalem was a series of portraits of Palestinians killed in the rebellion.'
re_labels = ["Organization based in", "Located in", "Live in", "Work for", "Kill"]
user_prompt = user_prompt.format(re_labels=re_labels, text=query)
messages = [
{
"role": "system",
"content": "You are an expert in Relation Extraction (RE) task."
},
{
"role": "user",
"content": user_prompt
}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer(text, return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response) # Organization based in: Hakawati Theatre, Jerusalem
```
## Contact
**Email**: [email protected]
**LinkedIn**: [Dang Phuong Nam](https://www.linkedin.com/in/dang-phuong-nam-157912288/)
**Facebook**: [Phương Nam](https://www.facebook.com/phuong.namdang.7146557)
## Support The Project
If you find this project helpful and wish to support its ongoing development, here are some ways you can contribute:
1. **Star the Repository**: Show your appreciation by starring the repository. Your support motivates further
development
and enhancements.
2. **Contribute**: We welcome your contributions! You can help by reporting bugs, submitting pull requests, or
suggesting new features.
3. **Donate**: If you’d like to support financially, consider making a donation. You can donate through:
- Vietcombank: 9912692172 - DANG PHUONG NAM
Thank you for your support!
## Citation
Please cite as
```Plaintext
@misc{LlamaRE-8B-Instruct-ZeroShot,
title={LlamaRE: A Large Language Model for Relation Extraction},
author={Nam Dang Phuong},
year={2025},
publisher={Huggingface},
}
```
|
cucucu666/huanhu-6.16
|
cucucu666
| 2025-06-16T06:37:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-16T03:41:06Z |
---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: labii face, Crayon Shin-chan style, cheerful expression, big smile,
open mouth, plain color background
widget:
- text: labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth,
plain color background
output:
url: image_0.png
- text: labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth,
plain color background
output:
url: image_1.png
- text: labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth,
plain color background
output:
url: image_2.png
- text: labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth,
plain color background
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux-Fill DreamBooth LoRA - cucucu666/huanhu-6.16
<Gallery />
## Model description
These are cucucu666/huanhu-6.16 DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth, plain color background` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/cucucu666/huanhu-6.16/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image
# These LoRA weights were trained on FLUX.1-Fill-dev (an inpainting model), so load the matching Fill pipeline.
pipeline = FluxFillPipeline.from_pretrained("black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('cucucu666/huanhu-6.16', weight_name='pytorch_lora_weights.safetensors')
image, mask = load_image("input.png"), load_image("mask.png")  # user-supplied base image and inpainting mask
image = pipeline('labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth, plain color background', image=image, mask_image=mask).images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
prithivMLmods/Ross-640-BMath-1.5B-GGUF
|
prithivMLmods
| 2025-06-16T06:30:52Z | 215 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"math",
"text-generation",
"en",
"base_model:prithivMLmods/Ross-640-BMath-1.5B",
"base_model:quantized:prithivMLmods/Ross-640-BMath-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-11T12:19:02Z |
---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Ross-640-BMath-1.5B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
---
# **Ross-640-BMath-1.5B-GGUF**
> **Ross-640-BMath-1.5B** is an **experimental, high-precision math explanation model** fine-tuned on **Qwen2-1.5B**, designed to provide **step-by-step mathematical derivations** and **detailed concept explanations** across a wide range of mathematical domains. It is **not optimized for general reasoning or conversation**, and focuses primarily on **structured, non-reasoning math workflows** including algebra, calculus, number theory, and combinatorics.
## Model Files
| File Name | Size | Format | Description |
|-----------|------|--------|-------------|
| Ross-640-BMath-1.5B.F32.gguf | 6.18 GB | F32 | Full precision 32-bit floating point |
| Ross-640-BMath-1.5B.F16.gguf | 3.09 GB | F16 | Half precision 16-bit floating point |
| Ross-640-BMath-1.5B.BF16.gguf | 3.09 GB | BF16 | Brain floating point 16-bit |
| Ross-640-BMath-1.5B.Q8_0.gguf | 1.65 GB | Q8_0 | 8-bit quantized |
| Ross-640-BMath-1.5B.Q6_K.gguf | 1.27 GB | Q6_K | 6-bit quantized |
| Ross-640-BMath-1.5B.Q5_K_M.gguf | 1.13 GB | Q5_K_M | 5-bit quantized, medium quality |
| Ross-640-BMath-1.5B.Q5_K_S.gguf | 1.1 GB | Q5_K_S | 5-bit quantized, small quality |
| Ross-640-BMath-1.5B.Q4_K_M.gguf | 986 MB | Q4_K_M | 4-bit quantized, medium quality |
| Ross-640-BMath-1.5B.Q4_K_S.gguf | 940 MB | Q4_K_S | 4-bit quantized, small quality |
| Ross-640-BMath-1.5B.Q3_K_L.gguf | 880 MB | Q3_K_L | 3-bit quantized, large quality |
| Ross-640-BMath-1.5B.Q3_K_M.gguf | 824 MB | Q3_K_M | 3-bit quantized, medium quality |
| Ross-640-BMath-1.5B.Q3_K_S.gguf | 761 MB | Q3_K_S | 3-bit quantized, small quality |
| Ross-640-BMath-1.5B.Q2_K.gguf | 676 MB | Q2_K | 2-bit quantized |
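As a quick sanity check, any of the files above can be run directly with a local llama.cpp build (file name and prompt are illustrative):

```bash
# Run the 4-bit medium quant with llama.cpp's CLI
./llama-cli -m Ross-640-BMath-1.5B.Q4_K_M.gguf \
  -p "Differentiate f(x) = x^2 * sin(x) step by step."
```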
## Quants Usage
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

|
parveen-ka-viral-video/Original.18.parveen.viral.video.bilasipara.new.video.parbin.bilasipara.viral.video.link
|
parveen-ka-viral-video
| 2025-06-16T05:40:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-16T05:36:14Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
s-emanuilov/Tucan-2.6B-v1.0
|
s-emanuilov
| 2025-06-16T05:35:49Z | 187 | 1 | null |
[
"safetensors",
"gemma2",
"function_calling",
"MCP",
"tool_use",
"bg",
"arxiv:2503.23278",
"arxiv:2408.00118",
"arxiv:2412.10893",
"base_model:INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0",
"base_model:finetune:INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0",
"license:gemma",
"region:us"
] | null | 2025-06-07T21:26:39Z |
---
license: gemma
language:
- bg
base_model:
- INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0
tags:
- function_calling
- MCP
- tool_use
---
# Tucan-2.6B-v1.0
## Bulgarian Language Models for Function Calling 🇧🇬
> 📄 **Full methodology, dataset details, and evaluation results coming in the upcoming paper**
## Overview 🚀
TUCAN (Tool-Using Capable Assistant Navigator) is a series of open-source Bulgarian language models fine-tuned specifically for function calling and tool use.
These models can interact with external tools, APIs, and databases, making them appropriate for building AI agents and [Model Context Protocol (MCP)](https://arxiv.org/abs/2503.23278) applications.
Built on top of [BgGPT models](https://huggingface.co/collections/INSAIT-Institute/bggpt-gemma-2-673b972fe9902749ac90f6fe) from [INSAIT Institute](https://insait.ai/), which were themselves built on [Gemma 2](https://arxiv.org/pdf/2408.00118), Tucan models have been enhanced with function-calling capabilities.
## Motivation 🎯
Although BgGPT models demonstrate [strong Bulgarian language comprehension](https://arxiv.org/pdf/2412.10893), they face challenges in maintaining the precise formatting necessary for consistent function calling. Despite implementing detailed system prompts, their performance in this specific task remains suboptimal.
This project addresses that gap by fine-tuning BgGPT, providing the Bulgarian AI community with proper tool-use capabilities in their native language.
## Models and variants 📦
Available in three sizes with full models, LoRA adapters, and quantized GGUF variants:
<div align="center">
| Model Size | Full Model | LoRA Adapter | GGUF (Quantized) |
|------------|------------|--------------|------------------|
| **2.6B** | [Tucan-2.6B-v1.0](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0) 📍| [LoRA](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-GGUF) |
| **9B** | [Tucan-9B-v1.0](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-GGUF) |
| **27B** | [Tucan-27B-v1.0](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-GGUF) |
*GGUF variants include: q4_k_m, q5_k_m, q6_k, q8_0, q4_0 quantizations*
📍 *Current model/repo*
</div>
Models and quantizations are also available for easy use in Ollama: https://ollama.com/s_emanuilov/tucan
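For example, assuming the default tag published on that page, a local chat session is one command away:

```bash
# Pull and chat with a Tucan build via Ollama (exact tags are listed on the page above)
ollama run s_emanuilov/tucan
```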
## Benchmarks 📊
All evaluations were performed using the [Tucan evaluation framework](https://github.com/s-emanuilov/tucan), with results averaged across multiple runs. Tucan models demonstrate superior function-calling capabilities compared to their BgGPT counterparts, with particularly strong improvements in smaller model sizes. To ensure no catastrophic forgetting occurred, we evaluated knowledge retention using [EleutherAI's lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) on Bulgarian benchmarks, confirming that each Tucan model maintains performance on par with its BgGPT equivalent.
<div align="center">
| Model | Function Calling | HellaswagBG | WinograndeBG | ARC-Easy-BG | ARC-Challenge-BG |
|-------|-----------------|-------------|--------------|-------------|------------------|
| **Tucan-2.6B-v1.0** 🔥 | **0.7875** | 0.5924 | 0.6456 | 0.5657 | 0.3754 |
| **Tucan-9B-v1.0** 🔥 | **0.8667** | 0.7046 | 0.7151 | 0.7024 | 0.5188 |
| **Tucan-27B-v1.0** 🔥 | **0.875** | 0.6179 | 0.6275 | 0.6486 | 0.442 |
| BgGPT-Gemma-2-2.6B-IT-v1.0 | 0.5874 | 0.6306 | 0.5821 | 0.5657 | 0.372 |
| BgGPT-Gemma-2-9B-IT-v1.0 | 0.7833 | 0.7057 | 0.719 | 0.7231 | 0.5188 |
| BgGPT-Gemma-2-27B-IT-v1.0 | 0.8667 | 0.62 | 0.6212 | 0.6587 | 0.459 |
*Note: 27B models were evaluated in 8-bit precision for comparison purposes.*
</div>
## Usage 🛠️
### Quick start ⚡
```bash
pip install -U "transformers[torch]" accelerate bitsandbytes
```
### Prompt format ⚙️
**Critical:** Use this format for function calling for the best results.
<details>
<summary><strong>📋 Required system prompt template</strong></summary>
```
<bos><start_of_turn>user
Ти си полезен AI асистент, който предоставя полезни и точни отговори.
Имаш достъп и можеш да извикаш една или повече функции, за да помогнеш с потребителското запитване. Използвай ги, само ако е необходимо и подходящо.
Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, а след това ще получиш резултат от изпълнението в блок ```tool_response```.
## Шаблон за извикване:
```tool_call
{"name": <function-name>, "arguments": <args-json-object>}```
## Налични функции:
[your function definitions here]
## Потребителска заявка:
[your query in Bulgarian]<end_of_turn>
<start_of_turn>model
```
</details>
### Note 📝
**The model only generates the `tool_call` blocks with function names and parameters - it doesn't actually execute the functions.** Your client application must parse these generated calls, execute the actual functions (API calls, database queries, etc.), and provide the results back to the model in `tool_response` blocks so the conversation can continue with interpretation of the results. A full demo is coming soon.
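For illustration, a minimal client-side sketch of that loop's first step might look like this (the helper below is hypothetical, not part of this repo):

```python
import json
import re

def parse_tool_calls(model_output: str):
    """Extract the JSON payloads of all ```tool_call``` blocks the model emitted."""
    pattern = r"```tool_call\s*(\{.*?\})\s*```"
    return [json.loads(m) for m in re.findall(pattern, model_output, re.DOTALL)]

# Your application would execute each parsed call and send the result back
# to the model inside a ```tool_response``` block.
calls = parse_tool_calls('```tool_call\n{"name": "get_time", "arguments": {}}```')
print(calls)  # [{'name': 'get_time', 'arguments': {}}]
```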
### Python example 🐍
<details>
<summary><strong>💻 Complete Working Example</strong></summary>
```python
import torch
import json
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
# Load model
model_name = "s-emanuilov/Tucan-2.6B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
attn_implementation="eager" # Required for Gemma models
)
# Create prompt with system template
def create_prompt(functions, user_query):
system_prompt = """Ти си полезен AI асистент, който предоставя полезни и точни отговори.
Имаш достъп и можеш да извикаш една или повече функции, за да помогнеш с потребителското запитване. Използвай ги, само ако е необходимо и подходящо.
Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, а след това ще получиш резултат от изпълнението в блок ```tool_response```.
## Шаблон за извикване:
```tool_call
{{"name": <function-name>, "arguments": <args-json-object>}}```
"""
functions_text = json.dumps(functions, ensure_ascii=False, indent=2)
full_prompt = f"{system_prompt}\n## Налични функции:\n{functions_text}\n\n## Потребителска заявка:\n{user_query}"
chat = [{"role": "user", "content": full_prompt}]
return tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# Example usage
functions = [{
"name": "create_calendar_event",
"description": "Creates a new event in Google Calendar.",
"parameters": {
"type": "object",
"properties": {
"title": {"type": "string"},
"date": {"type": "string"},
"start_time": {"type": "string"},
"end_time": {"type": "string"}
},
"required": ["title", "date", "start_time", "end_time"]
}
}]
query = "Създай събитие 'Годишен преглед' за 8-ми юни 2025 от 14:00 до 14:30."
# Generate response
prompt = create_prompt(functions, query)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=2048,
temperature=0.1,
top_k=25,
top_p=1.0,
repetition_penalty=1.1,
do_sample=True,
eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<end_of_turn>")],
pad_token_id=tokenizer.eos_token_id
)
result = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(result)
```
</details>
## Performance & Dataset 📊
> 📄 **Full methodology, dataset details, and comprehensive evaluation results coming in the upcoming paper**
**Dataset:** 10,000+ bilingual (Bulgarian/English) function-calling examples across 1,000+ topics, including tool calls with single/multiple arguments, optional parameters, follow-up queries, multi-tool selection, ambiguous queries requiring clarification, and conversational interactions without tool use. Data sourced from manual curation and synthetic generation (Gemini Pro 2.5/GPT-4.1/Sonnet 4).
**Results:** Significant improvements in tool-use capabilities over base BgGPT models: 34.1% for 2.6B, 10.6% for 9B, and 1.0% for 27B models in [internal benchmarks](https://github.com/s-emanuilov/tucan). Beyond raw function-calling scores, all Tucan models demonstrate more natural conversational flow while maintaining tool-use capabilities, retaining their base knowledge.
## Acknowledgments 🙏
Built on top of [BgGPT series](https://huggingface.co/collections/INSAIT-Institute/bggpt-gemma-2-673b972fe9902749ac90f6fe).
## Questions & Contact 💬
For questions, collaboration, or feedback: **[Connect on LinkedIn](https://www.linkedin.com/in/simeon-emanuilov/)**
|
mezzo-fun-viral-video-Original/mezzo.fun.Viral.Video.Original.Clip
|
mezzo-fun-viral-video-Original
| 2025-06-16T05:06:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-16T05:03:30Z |
<a href="https://t.co/dTvnXACQMR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
japat123/gemma_jun16_1
|
japat123
| 2025-06-16T05:00:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"base_model:quantized:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-16T04:59:21Z |
---
base_model: unsloth/gemma-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** japat123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.5_0.25_epoch2
|
MinaMila
| 2025-06-16T04:59:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T04:58:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|