| modelId (string, 5–138 chars) | author (string, 2–42 chars) | last_modified (date, 2020-02-15 11:33:14 to 2025-05-11 06:26:45) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 453 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-05-11 06:26:18) | card (string, 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
paro-aarti-link/18-video.paro.aarti.viral.video.original.here | paro-aarti-link | "2025-05-09T21:40:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-09T21:38:51Z" | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=paro-aarti)
[🔴 CLICK HERE 🌐==►► Download Now](https://videohere.top/?V=paro-aarti)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=paro-aarti) |
icefog72/Ice0.114-09.05-RP | icefog72 | "2025-05-09T21:39:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T21:02:04Z" | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Ice0.114-09.05-RP
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
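For readers unfamiliar with the method, SLERP interpolates along the great-circle arc between two flattened weight tensors rather than along the straight line between them. A standard statement of the formula (an editorial gloss, not part of the original card) is:

```latex
% Spherical linear interpolation between weight tensors \theta_0 and \theta_1
% at factor t, where \Omega is the angle between them:
%   \Omega = \arccos\left( \frac{\theta_0 \cdot \theta_1}{\lVert\theta_0\rVert\,\lVert\theta_1\rVert} \right)
\mathrm{slerp}(\theta_0, \theta_1; t)
  = \frac{\sin\big((1-t)\,\Omega\big)}{\sin\Omega}\,\theta_0
  + \frac{\sin\big(t\,\Omega\big)}{\sin\Omega}\,\theta_1
```

At t = 0 this returns θ₀ (the base model) and at t = 1 it returns θ₁, which is how the per-layer t schedules in the configuration below trade the two parents off.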
### Models Merged
The following models were included in the merge:
* G:\FModels\Ice0.113-08.05-RP
* E:\FModels\Ice0.107-22.04-RP
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: G:\FModels\Ice0.113-08.05-RP
        layer_range: [0, 32]
      - model: E:\FModels\Ice0.107-22.04-RP
        layer_range: [0, 32]
merge_method: slerp
base_model: G:\FModels\Ice0.113-08.05-RP
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
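As a concrete illustration of what the merge computes per tensor, here is a minimal PyTorch sketch of SLERP (my own sketch of the formula above, not mergekit's actual implementation, which handles edge cases and tensor filtering differently):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shape weight tensors."""
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two tensors, treated as flat vectors.
    cos_omega = torch.clamp(
        torch.dot(a_flat, b_flat) / (a_flat.norm() * b_flat.norm() + eps), -1.0, 1.0
    )
    omega = torch.arccos(cos_omega)
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        out = (1.0 - t) * a_flat + t * b_flat
    else:
        so = torch.sin(omega)
        out = (torch.sin((1.0 - t) * omega) / so) * a_flat \
            + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```

In the config above, t is not a single number: the `filter` entries give self_attn and mlp tensors their own per-layer schedules ([0, 0.5, 0.3, 0.7, 1] and its mirror), with 0.5 as the fallback for all remaining tensors.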
|
ajagota71/toxicity-reward-model-max-margin-seed-400-pythia-160m-checkpoint-50 | ajagota71 | "2025-05-09T21:39:14Z" | 0 | 0 | null | [
"safetensors",
"gpt_neox",
"region:us"
] | null | "2025-05-09T21:38:52Z" | # toxicity-reward-model-max-margin-seed-400-pythia-160m-checkpoint-50
This model was trained with max_margin IRL (max-margin inverse reinforcement learning) to learn toxicity reward signals.

- Base model: EleutherAI/pythia-160m
- Original model: EleutherAI/pythia-160M
- Detoxified model: ajagota71/pythia-160m-detox-epoch-100
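The card does not include training code. As a rough sketch of the max-margin objective described above (my reading of "max_margin IRL", with hypothetical names: the reward model scores paired original/detoxified completions and is pushed to prefer the detoxified ones by a fixed margin):

```python
import torch
import torch.nn.functional as F

def max_margin_loss(r_detox: torch.Tensor, r_toxic: torch.Tensor,
                    margin: float = 1.0) -> torch.Tensor:
    """Hinge loss: penalize pairs where the detoxified completion is not
    scored at least `margin` above the original (toxic) completion."""
    return F.relu(margin - (r_detox - r_toxic)).mean()

# Example with scalar reward-head outputs for a batch of 4 paired completions:
r_detox = torch.tensor([0.8, 0.3, 1.2, 0.5])
r_toxic = torch.tensor([-0.2, 0.4, 0.1, -0.9])
print(max_margin_loss(r_detox, r_toxic))  # tensor(0.2750)
```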
---
language: en
tags:
- toxicity
- reward-model
- irl
library_name: transformers
base_model: pythia-160m
pipeline_tag: text-classification
---
|
shengyuanhu/benchmark_wmdp_ga_ckpt_140 | shengyuanhu | "2025-05-09T21:38:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T21:26:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
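Since the section above is left as a placeholder, here is a minimal sketch of the standard 🤗 transformers loading path for this repo (assuming the checkpoint loads with the usual causal-LM classes, as its mistral/text-generation tags suggest; untested against this specific model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "shengyuanhu/benchmark_wmdp_ga_ckpt_140"  # repo id from this row
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# device_map="auto" requires the `accelerate` package.
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```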
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
YuanyuanZhang/CGNJAMSIN_conformer | YuanyuanZhang | "2025-05-09T21:37:06Z" | 0 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"nl",
"dataset:cgn",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | "2025-05-09T12:27:40Z" | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: nl
datasets:
- cgn
license: cc-by-4.0
---
## ESPnet2 ASR model
### `YuanyuanZhang/CGNJAMSIN_conformer`
This model was trained by YuanyuanZhang using the cgn recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b053cf10ce22901f9c24b681ee16c1aa2c79a8c2
pip install -e .
cd egs2/cgn/cgn_jasmin_train
./run.sh --skip_data_prep false --skip_train true --download_model YuanyuanZhang/CGNJAMSIN_conformer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Sep 19 14:54:50 CEST 2023`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.5`
- pytorch version: `pytorch 1.10.0`
- Git hash: `b053cf10ce22901f9c24b681ee16c1aa2c79a8c2`
- Commit date: `Fri Dec 31 22:36:23 2021 +0900`
## train_asr_conformer_fbankpitch_cgn_g1g2_g1vc2foldgcdc
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_default_asr_model_train.loss.ave_10best/comp_p_g1|350|1197|82.5|12.4|5.0|1.9|19.4|34.6|
|decode_asr_default_asr_model_train.loss.ave_10best/comp_q_g1_new|1213|5282|93.5|5.4|1.1|0.9|7.3|22.4|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_default_asr_model_train.loss.ave_10best/comp_p_g1|350|5776|91.7|3.7|4.6|2.4|10.8|34.6|
|decode_asr_default_asr_model_train.loss.ave_10best/comp_q_g1_new|1213|26213|98.0|1.1|0.9|1.0|3.0|22.4|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_default_asr_model_train.loss.ave_10best/comp_p_g1|350|1419|82.2|11.7|6.1|4.1|21.9|34.6|
|decode_asr_default_asr_model_train.loss.ave_10best/comp_q_g1_new|1213|6317|93.2|4.5|2.2|1.3|8.1|22.4|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_streaming_conformer_org.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/train_asr_conformer_fbankpitch_cgn_g1g2_g1vc2foldgcdc
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 53981
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 35
patience: null
val_scheduler_criterion:
- valid
- acc
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
  - loss
  - min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: true
freeze_param: []
num_iters_per_epoch: 10000
batch_size: 20
valid_batch_size: null
batch_bins: 10000000
valid_batch_bins: null
train_shape_file:
- exp/stats_cgn_jasming1g2_g1vc2foldgcdc_83dim/train/speech_shape
- exp/stats_cgn_jasming1g2_g1vc2foldgcdc_83dim/train/text_shape.bpe
valid_shape_file:
- exp/stats_cgn_jasming1g2_g1vc2foldgcdc_83dim/valid/speech_shape
- exp/stats_cgn_jasming1g2_g1vc2foldgcdc_83dim/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 512
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/fbank_pitch/cgn_jasming1g2_train_g1vc2fold_dc2fold/feats.scp
  - speech
  - kaldi_ark
- - dump/fbank_pitch/cgn_jasming1g2_train_g1vc2fold_dc2fold/text
  - text
  - text
valid_data_path_and_name_and_type:
- - dump/fbank_pitch/dev_s/feats.scp
  - speech
  - kaldi_ark
- - dump/fbank_pitch/dev_s/text
  - text
  - text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
  lr: 0.001
scheduler: warmuplr
scheduler_conf:
  warmup_steps: 30000
token_list:
- <blank>
- <unk>
- ▁
- ▁ja
- ▁de
- t
- ▁dat
- ▁uh
- ▁'
- ▁en
- ▁ik
- ▁een
- ▁je
- ▁is
- ▁die
- ▁van
- s
- ▁in
- ▁maar
- ▁niet
- ▁dan
- ▁ook
- ▁op
- en
- ▁wel
- ''''
- ▁ggg
- ▁ze
- ▁nou
- ▁het
- ▁zo
- k
- n
- ▁met
- ▁dus
- ▁te
- r
- ▁voor
- ▁nee
- ▁wat
- e
- ▁we
- ▁of
- ▁nog
- ▁als
- ▁zijn
- ▁was
- ▁aan
- ▁daar
- ▁heb
- ▁d
- ▁oh
- ▁er
- ▁xxx
- ▁bij
- ▁om
- ▁want
- ▁moet
- ▁hij
- ▁naar
- ▁hebben
- ▁heel
- ▁over
- ▁toch
- ▁al
- ▁kan
- '-'
- ▁hè
- ▁uit
- ▁gewoon
- ▁heeft
- ▁had
- d
- m
- ▁weet
- ▁goed
- ▁toen
- ▁echt
- ▁weer
- ▁ie
- ▁uhm
- ▁meer
- ▁nu
- ▁gaan
- de
- ▁hoe
- ▁ben
- ▁door
- ▁doen
- ▁veel
- ▁gaat
- ▁da
- ▁waar
- ▁denk
- ▁geen
- er
- ▁even
- ▁één
- ▁twee
- ▁zou
- ▁dit
- ▁hier
- ▁ver
- ▁ge
- ▁natuurlijk
- ▁eigenlijk
- ▁mij
- ▁zeg
- ▁mee
- ▁vind
- ▁wordt
- ▁wij
- ▁wil
- ▁mensen
- ▁kunnen
- ▁iets
- ▁u
- ▁me
- ▁beetje
- je
- ▁jij
- ▁leuk
- ▁hoor
- ns
- hu
- ▁helemaal
- ▁keer
- ▁be
- ▁allemaal
- ▁ga
- ▁af
- ▁haar
- ▁jaar
- ▁zit
- ▁z
- ▁m
- f
- ▁mm
- ▁zeggen
- ▁altijd
- ▁mijn
- ▁komt
- ▁andere
- ▁zei
- ▁moeten
- ▁zich
- ▁worden
- ▁hele
- p
- ▁alleen
- ▁tegen
- ▁na
- ▁drie
- ken
- u
- ▁tot
- te
- ▁zal
- ▁zegt
- ▁net
- ing
- ▁omdat
- ▁misschien
- ▁komen
- ▁weg
- ▁waren
- ▁zitten
- ▁tijd
- ▁deze
- ▁toe
- ▁staat
- ▁hebt
- ten
- tje
- ▁anders
- ge
- ▁zelf
- ▁onder
- ▁kijken
- ▁ons
- re
- ▁hadden
- ▁erg
- ▁maken
- ▁hun
- ▁niks
- ▁geweest
- ▁dingen
- a
- el
- ▁terug
- ▁zij
- ie
- ▁jullie
- den
- y
- le
- ▁mag
- ▁uur
- g
- c
- ▁volgens
- ste
- ▁man
- ▁zien
- lo
- ▁doe
- ▁alles
- se
- ▁mmm
- ▁oké
- ▁vier
- l
- ▁dag
- ▁ging
- ▁nooit
- ▁kijk
- ▁precies
- o
- ▁eerste
- ▁gedaan
- ▁bedoel
- ▁kun
- ▁lekker
- ▁week
- ig
- ▁mooi
- in
- eren
- ▁moment
- ▁achter
- ▁dacht
- ▁wie
- ▁inderdaad
- op
- ▁moest
- ▁grote
- ▁vond
- ▁binnen
- ▁staan
- ▁verder
- ▁zeker
- b
- ▁tien
- ver
- ▁best
- ▁werd
- ▁doet
- ▁werk
- ▁lang
- jes
- ne
- ▁kon
- ▁alle
- ers
- ▁huis
- ▁soort
- ▁zie
- ▁vijf
- len
- ▁laat
- ▁kwam
- ▁willen
- ▁kinderen
- ▁hem
- ▁nederland
- ▁iemand
- ri
- ▁laten
- ▁jou
- sen
- heid
- ▁steeds
- ▁kunt
- al
- ▁elkaar
- ▁paar
- ▁eerst
- ▁geval
- ▁bent
- ke
- ▁kom
- ▁effe
- ▁weten
- me
- ▁zat
- ▁a
- ▁bijvoorbeeld
- 'on'
- ▁gehad
- ▁tussen
- ▁zoals
- ▁re
- ij
- ro
- ▁waarom
- ▁vaak
- la
- be
- ▁nieuwe
- ▁gezegd
- ▁krijgen
- ▁zoiets
- v
- ▁half
- ▁bijna
- ▁hé
- ▁moeder
- ▁thuis
- ▁geloof
- ▁auto
- tjes
- ▁school
- ▁ieder
- ma
- z
- ▁gegeven
- ▁eigen
- ▁gezien
- ▁geld
- or
- nen
- us
- ▁ah
- ▁per
- gen
- il
- ▁vraag
- ren
- ▁bal
- ▁maakt
- ▁pas
- ▁leven
- ▁vandaag
- h
- ui
- ▁g
- aan
- ▁c
- ▁beter
- ▁iedereen
- es
- ▁ziet
- ur
- ▁zin
- ▁b
- ▁tweede
- ul
- an
- ve
- ▁ligt
- ▁krijg
- ▁ander
- ▁groot
- ▁honderd
- ▁heet
- ▁hand
- ▁lijkt
- end
- ▁eten
- ▁laatste
- ▁vader
- ▁werken
- bo
- w
- zen
- ▁morgen
- ▁minder
- pen
- ti
- ▁vragen
- ▁boven
- ▁sch
- ▁vast
- ▁zes
- ▁vinden
- eerd
- ra
- ▁stond
- ▁gulden
- ▁daarom
- ▁f
- ▁bezig
- ▁onze
- elijk
- ▁samen
- ▁buiten
- ▁snel
- ▁e
- ▁vrij
- ▁v
- ▁nodig
- ster
- der
- ▁zeven
- ▁geven
- na
- ▁kamer
- ▁wou
- man
- per
- ▁zoveel
- ▁hoofd
- ▁ro
- ▁acht
- ▁aantal
- ▁krijgt
- sch
- ▁vol
- ▁gemaakt
- ▁god
- ▁praten
- ▁zullen
- ▁rond
- ▁zag
- ▁heen
- ▁wilde
- atie
- ▁eens
- ▁manier
- cht
- ter
- ▁vrouw
- ▁goeie
- ▁zouden
- ▁minuten
- ▁denken
- ▁houden
- em
- ▁wist
- ▁wanneer
- ▁volgende
- ▁later
- ▁meteen
- ker
- ▁klein
- ▁water
- ▁weinig
- ▁blijven
- ▁on
- ▁vooral
- ▁plaats
- ▁terwijl
- ▁hoop
- ▁daarna
- ▁ergens
- ol
- i
- ▁lopen
- ▁juist
- ven
- ▁idee
- ige
- do
- ze
- ▁ont
- ▁vroeg
- ▁boek
- ingen
- ▁geleden
- ▁moeilijk
- ho
- ▁graag
- ▁kleine
- li
- ▁nemen
- ▁soms
- ▁halen
- we
- ▁open
- ▁kant
- he
- oor
- ▁onderzoek
- ▁klopt
- ▁negen
- ▁gek
- uw
- as
- ▁druk
- ▁elke
- ▁stad
- ▁begin
- ▁sp
- ▁ha
- ▁dagen
- ▁duidelijk
- aar
- ▁waarschijnlijk
- ▁mar
- ▁eind
- men
- co
- ▁kind
- ▁vindt
- ▁land
- ga
- is
- ▁ma
- baar
- ▁lezen
- ▁zelfs
- af
- lijk
- ▁negentien
- ▁genoeg
- ▁hard
- ck
- aal
- ▁langs
- at
- ▁liggen
- ▁blijft
- ▁jaren
- ha
- ▁st
- da
- ▁jouw
- 'no'
- ag
- ▁vroeger
- ▁welke
- ▁stuk
- ▁via
- gel
- uren
- rij
- ▁gelijk
- ▁trouwens
- ▁nederlands
- x
- ▁zetten
- di
- ▁minister
- ▁tuurlijk
- ▁prijs
- ▁deed
- ▁vakantie
- ▁klaar
- ar
- ▁ouders
- ▁hu
- ▁weken
- ▁jongen
- ▁twintig
- it
- land
- eg
- ▁mooie
- ▁joh
- ▁woord
- ▁gingen
- ▁zonder
- ▁ding
- ▁geeft
- am
- ▁dertig
- ▁men
- ▁vijftig
- ▁gegaan
- komen
- ▁la
- ▁belangrijk
- ▁valt
- ▁sta
- ▁amsterdam
- ▁probleem
- ▁wereld
- ▁le
- ▁ca
- ▁gemeente
- ▁leuke
- eld
- ▁politie
- ▁ken
- uit
- ▁dood
- ▁mogelijk
- ik
- ed
- ▁geworden
- om
- nie
- ▁vertellen
- ▁bed
- ▁oud
- ▁pa
- ▁mevrouw
- ad
- ▁hoeveel
- ▁eerder
- ▁gebeurt
- ▁jo
- ▁punt
- ▁jawel
- va
- ▁hoeft
- ▁kreeg
- ende
- ▁werkt
- ▁niemand
- ▁ach
- ▁vorige
- ▁loopt
- vo
- ▁groep
- ▁bellen
- ië
- ▁meneer
- ▁snap
- ▁p
- ▁hoog
- ba
- ▁plan
- elijke
- ▁twaalf
- ▁ra
- ▁verhaal
- ▁vanaf
- ▁con
- ▁procent
- ▁straks
- ▁hartstikke
- ▁horen
- lie
- ▁hou
- ▁po
- ▁bel
- ▁telefoon
- ▁zaterdag
- pe
- ▁nederlandse
- ▁ho
- ▁nieuw
- mo
- ▁gezellig
- ▁rest
- ▁slecht
- ▁licht
- ▁meestal
- ▁stukje
- ▁betekent
- ka
- ▁film
- ▁hum
- ▁gehoord
- ▁h
- ▁nummer
- ▁euro
- ▁weekend
- ▁foto
- ia
- ▁afgelopen
- gaan
- ▁alweer
- ▁deur
- ant
- ▁tenminste
- ▁spelen
- ▁uitge
- ger
- to
- rin
- ▁blij
- ▁vrijdag
- ch
- ic
- ▁w
- ling
- uur
- ja
- ▁naast
- ▁klas
- ▁elf
- ▁fiets
- ▁i
- ▁grond
- wi
- val
- ▁beginnen
- ▁allerlei
- rbij
- ap
- ▁gekomen
- ▁naam
- ▁gisteren
- ▁deel
- wa
- che
- ▁kort
- ▁moesten
- ▁gesprek
- ▁jonge
- ko
- ▁betreft
- ▁bus
- nd
- dat
- ▁heer
- ▁kost
- ▁gesproken
- ▁ko
- ▁familie
- ▁dicht
- ▁gevoel
- ▁voorstellen
- ce
- ▁mis
- ▁normaal
- ▁kwamen
- ▁computer
- ▁kopen
- ▁grappig
- ▁uiteindelijk
- ▁mekaar
- ▁ongeveer
- ▁leren
- bi
- ▁ogen
- ▁zondag
- slag
- eert
- ▁uw
- eur
- huis
- ▁kijkt
- ei
- ▁begint
- ▁proberen
- ▁neem
- ▁o
- ▁vijftien
- sta
- ▁pi
- ▁bepaalde
- ine
- ▁wacht
- ▁gekregen
- ▁werden
- duizend
- ▁ar
- ▁duur
- ▁woorden
- ▁vanuit
- ijn
- hal
- wo
- ▁zaken
- ▁nul
- ▁mond
- ▁pro
- ▁hen
- erd
- ist
- ▁co
- kel
- ▁opge
- dig
- ▁houdt
- ▁kerk
- ▁ruimte
- ▁voorzitter
- ▁mogen
- kant
- ▁avonds
- ▁no
- weg
- ef
- ▁trein
- ▁avond
- ▁wachten
- ▁rijden
- ▁denkt
- ▁enige
- ▁verschillende
- ▁makkelijk
- ▁tafel
- ▁raar
- um
- ak
- werk
- ▁kop
- ci
- ▁vorig
- ▁jammer
- ▁bekend
- sel
- est
- ▁hoef
- ▁rustig
- ▁bedrijf
- ▁tra
- ▁winkel
- ▁muziek
- ▁betalen
- ▁tuin
- ▁ooit
- schap
- ▁mi
- ▁midden
- ale
- ▁straat
- ▁miljoen
- ▁mens
- ▁extra
- over
- ▁antwoord
- ▁meter
- ▁zaten
- ▁fout
- ▁nacht
- ▁lange
- ▁mo
- ▁boeken
- ▁daarvoor
- ding
- ▁gebeuren
- ▁wonen
- ▁an
- ▁lucht
- ▁pe
- ▁daarmee
- ca
- bouw
- ▁vrouwen
- ▁maandag
- ▁namelijk
- za
- bra
- ▁bank
- ▁buurt
- tijd
- ▁maanden
- ▁jongens
- ▁stel
- ▁word
- ▁bo
- ▁absoluut
- ▁stil
- ▁her
- ruit
- ▁tijdens
- ▁ouwe
- ▁ontzettend
- ▁vanavond
- ▁speelt
- vel
- ▁sommige
- stelling
- ▁lag
- ▁praat
- ▁ger
- ▁elk
- ▁meisje
- ▁gelukkig
- ▁liep
- ▁taal
- ▁hoorde
- ▁zeer
- ▁rij
- ▁vers
- ▁maand
- ▁li
- ▁kwijt
- leg
- ▁eventjes
- ▁sowieso
- ▁overal
- ▁begonnen
- zi
- ▁gezicht
- ▁beste
- ▁bos
- for
- ▁recht
- ▁gebruiken
- ▁wedstrijd
- ▁kl
- ▁dezelfde
- onder
- ▁oude
- stand
- ▁zomer
- ▁derde
- go
- ▁problemen
- ische
- sche
- sc
- recht
- ▁geef
- ▁go
- ▁brand
- ▁gebied
- ▁fijn
- ▁terecht
- ▁iedere
- ▁prima
- ▁hart
- ▁mannen
- ▁brengen
- mel
- ▁naartoe
- komt
- houden
- ▁stem
- ▁les
- ▁schrijven
- ▁hoort
- ton
- ▁zoeken
- ▁los
- ▁dorp
- ▁du
- ▁neer
- ▁sa
- voer
- ▁noemen
- ▁aardig
- ▁last
- licht
- ▁gekocht
- ▁donderdag
- ▁gevraagd
- ▁spreken
- ▁he
- rop
- ▁so
- ▁richting
- mi
- ▁gister
- ▁eentje
- ni
- ai
- ▁beneden
- ▁regen
- ▁mocht
- ▁se
- ▁noem
- ▁bar
- ▁zorgen
- sten
- ering
- eel
- ▁konden
- lijn
- ▁collega
- ▁begon
- ug
- ning
- heden
- ▁hallo
- ▁waarin
- ▁pakken
- ▁meeste
- lopen
- ▁rol
- raad
- eerde
- ler
- ▁zaak
- ▁lachen
- mon
- ▁mama
- ▁vak
- ▁verteld
- ▁ki
- ▁afge
- ▁sinds
- ▁gang
- ▁begrijp
- dra
- ▁zwaar
- ▁plek
- ▁gebruik
- ▁vijfentwintig
- ▁orde
- ▁koffie
- ▁sorry
- ring
- ▁kn
- ▁veertig
- ▁kwart
- ▁gevonden
- ▁vorm
- reken
- ▁reden
- ▁blijkt
- vol
- ▁post
- ▁ta
- ▁hond
- pu
- aat
- ▁ad
- ▁kilometer
- ▁ru
- ▁zuid
- ▁slapen
- ▁alsof
- gro
- ▁bang
- tzelfde
- ▁ruim
- ▁langer
- ▁ba
- leiding
- ▁warm
- ▁gezin
- ▁zak
- stra
- lang
- ll
- chi
- ▁gebruikt
- ▁gebeld
- ▁viel
- ▁zorg
- gang
- aard
- ▁fi
- ▁gra
- ▁keek
- ▁handen
- ▁maak
- ▁baan
- ▁opnieuw
- ▁internet
- oord
- ier
- ▁su
- ▁boer
- ▁vraagt
- lig
- po
- por
- ▁lukt
- ▁ex
- ▁pak
- q
- ▁loop
- gekomen
- ▁daarvan
- ▁goede
- ▁handig
- ▁aange
- ▁gebeurd
- ▁brief
- ▁kra
- ▁sterk
- nis
- os
- th
- ▁tegenwoordig
- ▁oma
- ▁neemt
- ▁onderwijs
- ▁woensdag
- gesteld
- leden
- ff
- ▁zichzelf
- ▁merk
- tal
- ▁kaart
- ▁trap
- ▁laatst
- ▁geweldig
- boek
- ly
- ▁super
- oom
- ber
- elen
- ▁ju
- ▁nijmegen
- ▁gelezen
- geven
- of
- io
- ac
- ies
- lijke
- gezet
- gra
- ▁doel
- spel
- ▁tenten
- ▁dieren
- ▁kennen
- ▁liever
- ging
- ▁frank
- ▁leg
- ▁goh
- ▁feit
- avond
- ▁dinsdag
- ▁verschil
- ▁bloed
- oe
- rom
- ▁woont
- ▁allebei
- ▁oorlog
- zer
- ▁verwacht
- ▁contact
- ▁vergeten
- ▁helft
- leid
- halve
- dag
- ▁achttien
- ▁informatie
- ▁algemeen
- ▁leerlingen
- blo
- ▁stellen
- ▁genomen
- ▁vanmiddag
- isch
- ▁krant
- ▁stap
- zette
- ▁keuken
- ▁direct
- ip
- ▁leggen
- gaat
- ▁kast
- ▁zodat
- ▁inmiddels
- ▁verjaardag
- id
- mmen
- lij
- ▁l
- ▁anderen
- ▁stonden
- ▁hoge
- lu
- kamer
- ▁redelijk
- ▁schoon
- ▁feest
- ▁koop
- ▁actie
- ▁voorbeeld
- ders
- ▁vrienden
- ▁enkele
- ▁daarbij
- achtig
- ▁rotterdam
- ▁ged
- ▁vervelend
- ▁scha
- deel
- ▁och
- ▁zoek
- plaats
- loo
- ▁mark
- ion
- ▁gezet
- cha
- ▁har
- ▁dienst
- ▁eerlijk
- ▁pijn
- ▁broer
- si
- ooi
- ▁schiet
- burg
- ▁partij
- ▁kar
- ▁frans
- ▁hi
- bro
- ▁veertien
- ▁helpen
- delen
- pi
- ▁min
- ▁slaapzakken
- ▁studie
- ▁zon
- up
- ▁val
- ou
- ▁trekken
- ▁rugzakken
- ▁waarvan
- ▁rug
- ▁relatie
- ▁tekst
- ▁bon
- oon
- ▁zestien
- groep
- bare
- ▁hangen
- ▁tweeduizend
- ▁moe
- ▁inge
- bre
- ▁reis
- ▁tv
- honderd
- ▁beeld
- ▁ineens
- ▁situatie
- ▁programma
- sa
- maken
- ▁past
- ▁kunst
- ▁name
- ▁fa
- ▁cor
- druk
- ien
- ▁leer
- king
- ▁motie
- ▁band
- ▁va
- ▁voorbij
- ▁vlak
- ▁tweehonderd
- ▁seconden
- ▁wa
- chten
- ▁hoezo
- ▁cd
- ▁amerika
- ▁middel
- ▁papa
- vallen
- jo
- ï
- ▁principe
- ▁hetzelfde
- ▁tent
- ▁oranje
- sie
- ▁vrije
- ▁gehaald
- ▁kabinet
- lui
- ▁duizend
- ▁oog
- stel
- ▁klinkt
- ▁drinken
- gegaan
- ▁gauw
- ▁kent
- ▁leek
- ▁nogal
- zo
- ette
- ▁heleboel
- ▁ter
- pan
- ▁zestig
- pp
- ▁sla
- jaar
- ▁hel
- ▁basis
- even
- ud
- son
- ▁kans
- ▁onge
- ▁aandacht
- ▁vreselijk
- ▁haag
- ang
- ▁mei
- ▁beide
- zij
- ▁lees
- plan
- ▁nergens
- gegeven
- ▁vriend
- ▁ten
- gene
- ▁top
- ▁project
- ën
- ▁kla
- j
- ▁stom
- ▁bijzonder
- genomen
- ▁echte
- ▁waarop
- ▁ziek
- ▁verschrikkelijk
- zie
- ▁ni
- par
- laten
- ▁ouder
- ▁tante
- ▁totaal
- and
- ▁car
- ▁gezeten
- vang
- ▁vriendin
- ▁landen
- ▁moeite
- ▁halve
- ▁toekomst
- ▁mezelf
- ▁daardoor
- ▁zus
- ▁noord
- ▁nederlanders
- sluit
- zin
- ▁overigens
- ▁waarbij
- zak
- ▁spel
- ▁eeuw
- ▁zeiden
- gi
- lever
- ▁bas
- mail
- ▁groter
- ▁winter
- ▁utrecht
- ▁sport
- ▁voetbal
- ▁rechts
- ▁maakte
- loop
- war
- ▁prachtig
- min
- ▁fietsen
- ▁luister
- ▁spullen
- vi
- ▁commissie
- und
- ▁luisteren
- bb
- ▁pre
- ▁geluid
- roepen
- ▁rust
- ▁jezelf
- ▁interessant
- ▁stoel
- teken
- elde
- ▁lichaam
- ▁punten
- ▁nieuws
- ▁bedoeling
- ▁papier
- ▁rekening
- ▁engels
- ▁grootste
- lis
- ▁hield
- sprong
- ran
- ▁vallen
- ▁onderwerp
- del
- stellen
- les
- ▁ziekenhuis
- schijn
- ▁paul
- ▁korte
- trui
- ▁duurt
- ▁natuur
- ▁rood
- og
- ▁gewerkt
- ▁jong
- ▁bestaat
- é
- rich
- dy
- rit
- ▁omhoog
- ▁willem
- veld
- leggen
- act
- ▁kennis
- werken
- ▁stuur
- ▁groen
- vis
- ▁witte
- ▁enorme
- ▁behoorlijk
- ▁bu
- ▁tachtig
- ▁apart
- ▁chris
- ▁lijn
- ▁toevallig
- ▁televisie
- ▁gi
- ▁station
- ▁start
- kwam
- ▁liefde
- ▁voel
- ▁hoeven
- ld
- arts
- ▁vervolgens
- ▁meest
- ving
- zel
- wel
- doen
- ▁kiezen
- kaart
- ▁bestuur
- ▁veld
- ▁daarin
- ▁bla
- wet
- vin
- ▁zee
- ▁dochter
- ▁waardoor
- ele
- ▁tevoren
- ▁ton
- kie
- ▁gooi
- loos
- ▁gele
- ▁liet
- ▁voelde
- ▁volgend
- ▁gekeken
- ute
- reden
- ▁belang
- ▁wit
- ▁probeer
- ▁frankrijk
- ▁dikke
- ▁kluivert
- ▁meenemen
- ▁kat
- gemaakt
- ▁zomaar
- ▁duitsland
- ▁ontwikkeling
- voor
- bal
- ▁dom
- ▁raam
- ▁vee
- ▁radio
- ▁koud
- ▁hout
- ▁discussie
- ▁blijf
- ▁bleek
- ▁hoger
- voeg
- ▁gaf
- rus
- ▁bezoek
- ▁vandaan
- ▁beurt
- leven
- ▁dubbel
- komst
- bak
- ▁middag
- ▁kinder
- ouw
- ▁zeventien
- ▁wijze
- ▁schijnt
- ▁wind
- jarige
- band
- ▁bestaan
- ▁boeren
- wijk
- stok
- ▁periode
- ▁niveau
- ▁heerlijk
- ▁vis
- ▁lach
- ▁lastig
- ▁vi
- ief
- ▁dertien
- ▁slaap
- aten
- maat
- ▁verkeerd
- ▁bepaald
- ▁opleiding
- ▁laag
- ach
- bond
- ▁davids
- ▁bovendien
- ▁haast
- kom
- school
- ▁blik
- ▁regering
- ▁hangt
- ▁links
- pa
- ▁brood
- hee
- ▁zoon
- igheid
- ▁genoemd
- ▁duitse
- ▁am
- ▁belasting
- zorg
- punt
- ▁vieren
- gri
- ▁prettig
- ▁vanwege
- eling
- ▁hoi
- ▁opdracht
- ▁hoogte
- straat
- ▁eet
- du
- assen
- ▁zwarte
- ▁gedacht
- age
- hof
- ▁boot
- ▁probeert
- ▁verbinding
- ▁geschreven
- ▁huizen
- ▁vertel
- ▁kwaliteit
- ▁voorstel
- ▁amerikaanse
- ▁vanmorgen
- ▁kro
- ▁dik
- ▁donker
- ement
- ▁wezen
- pje
- ▁wilt
- ▁kleur
- ny
- anne
- gebied
- moe
- pas
- ▁overleg
- ▁komende
- ▁indruk
- ▁voldoende
- ▁einde
- ▁belde
- ▁jezus
- ▁bergkamp
- ▁hoek
- tra
- vaart
- ▁bak
- okken
- ▁hulp
- ▁huwelijk
- ▁publiek
- ▁stoppen
- laag
- ▁jongeren
- ▁pot
- wol
- ▁gebouw
- handel
- ▁overheid
- ▁inter
- ▁park
- gepakt
- ▁gevallen
- ille
- ▁europese
- fe
- cent
- cl
- ▁woon
- ica
- ▁muur
- ▁arm
- ▁slag
- ▁gehouden
- ▁markt
- ▁studenten
- ▁gebracht
- ▁pl
- ▁pieter
- ▁leeftijd
- isme
- stal
- ▁bord
- woord
- middag
- ▁universiteit
- ▁zwart
- ▁netjes
- ▁koe
- ▁qua
- ▁juli
- kop
- ▁hoewel
- ▁afstand
- ▁voren
- ▁vaker
- ▁erbij
- ▁feite
- ▁ei
- ▁betrokken
- ▁sy
- ▁restaurant
- eten
- ▁peter
- ▁wakker
- goed
- ▁si
- ▁buitenland
- ▁san
- naam
- oven
- getrokken
- ong
- ▁betaald
- staatssecretaris
- ▁woonde
- ppen
- ina
- gevoerd
- erde
- ▁zenden
- ▁bri
- ▁vreemd
- fi
- ▁berg
- ▁voelt
- ham
- ▁veranderd
- ▁juf
- ▁bleef
- ▁kap
- ▁burger
- iteit
- ▁mening
- ▁glas
- aars
- ▁arbeid
- ▁bloemen
- ▁milieu
- ▁energie
- ir
- weer
- ▁tegenover
- ▁vliegtuig
- oc
- ▁geboren
- ▁organisatie
- ▁verkeer
- win
- ▁leeg
- ▁ervaring
- ▁haalt
- ▁opgenomen
- ▁handel
- ▁wilden
- vereniging
- ▁straf
- ▁pin
- ▁diep
- ▁vertelde
- mers
- ▁hotel
- beeld
- ▁kracht
- ▁opeens
- ▁boom
- machine
- ▁di
- ▁politiek
- ▁im
- lic
- ▁meester
- ▁cent
- ▁belangrijke
- kte
- ▁verkopen
- groei
- ▁neus
- ▁tweeën
- ▁kregen
- ▁ervan
- ▁plat
- ▁flink
- ▁ijs
- ▁geschiedenis
- ▁bewust
- ▁ramp
- ▁gep
- ▁afspraak
- ▁leef
- ari
- ▁namen
- ▁daarop
- ▁bek
- ▁nam
- ▁minuut
- ▁gij
- tische
- ▁bedrijven
- zing
- ▁dadelijk
- ▁durf
- ▁langzaam
- tisch
- stuk
- toren
- ▁bre
- ▁adem
- lin
- ▁docent
- ▁kerst
- pie
- ▁beleid
- ▁leiden
- draai
- dienst
- ▁erop
- ▁par
- ▁afgesproken
- ▁kwartier
- ▁rechter
- woning
- rover
- hi
- ▁raad
- ▁keertje
- trekken
- ▁beest
- ▁regel
- ▁blauw
- ▁rijk
- ▁borst
- pot
- ▁vooruit
- ▁fantastisch
- ▁steen
- ▁gespeeld
- ▁han
- ▁meisjes
- ▁slot
- ▁cultuur
- ▁voorkomen
- raan
- ▁zingen
- ▁boekje
- ▁literatuur
- aak
- ▁broek
- ▁ontstaan
- ▁thee
- ▁kosten
- uze
- ▁wijn
- ▁mede
- waard
- ▁hoofdstuk
- schrift
- reizen
- schrijven
- ▁bier
- ▁enorm
- ▁letter
- ▁eenmaal
- ▁arnhem
- ▁aha
- ▁ongeluk
- ▁nadat
- ▁europa
- ▁college
- etje
- ard
- ▁opa
- ju
- ▁slim
- ▁geregeld
- ▁zeventig
- oep
- die
- ▁praktijk
- ▁persoon
- ▁vierde
- taal
- ▁daarover
- ▁as
- ▁oost
- lletje
- hou
- ▁simpel
- merk
- gehaald
- ▁bouw
- ▁volgen
- ▁beroep
- ▁komma
- ▁herinner
- ▁juni
- ▁tijdje
- ▁sociale
- ▁koningin
- que
- ▁vele
- ringen
- ▁pr
- ▁verbroken
- ▁vonden
- log
- ▁gewonnen
- ▁bedacht
- ▁zulke
- ▁gaten
- leider
- ▁gegeten
- ▁verhalen
- ▁proef
- ▁bomen
- ▁vet
- ▁bot
- ë
- ▁blijkbaar
- ▁gedeelte
- jan
- ▁vuur
- ▁stop
- ▁tom
- ▁eruit
- ▁dor
- maatschappij
- spect
- pro
- zwa
- ▁lief
- ▁spannend
- ▁winnen
- ▁draaien
- ▁volk
- ▁eindelijk
- gehouden
- staande
- ▁verplicht
- plein
- ▁kok
- ▁rit
- ▁fla
- ▁bru
- ▁plaatsen
- ▁slechte
- ▁vlees
- ▁heette
- ▁gisteravond
- ▁macht
- raf
- ▁effect
- ▁lui
- ▁stage
- ▁vogel
- ▁bruin
- ▁dak
- elle
- ▁stof
- ▁controle
- ana
- werking
- ▁justitie
- ▁harry
- ▁beweging
- ▁haal
- aren
- ▁balbezit
- ▁keuze
- ▁kapot
- stap
- ▁rot
- ▁kamp
- gebracht
- ▁eventueel
- ▁pop
- ▁waard
- ▁makkelijker
- hel
- rvan
- week
- ▁kaartje
- oog
- ▁to
- ▁benieuwd
- ▁succes
- ▁schrijf
- ▁deden
- hand
- ▁omgeving
- ▁gevaarlijk
- beheer
- haven
- ▁boos
- geslagen
- ▁mate
- deren
- berg
- ▁reactie
- ▁fles
- ▁aarde
- ▁gu
- ▁knap
- ▁kleren
- rijden
- out
- ▁bi
- ▁enzovoort
- ▁ogenblik
- ▁personeel
- ▁kende
- ▁aanwezig
- ▁overmars
- ▁middags
- ▁jeugd
- doo
- ▁gratis
- ▁slaat
- ▁voeten
- ▁gesch
- vre
- ▁team
- ium
- ▁zover
- brand
- bbel
- blad
- ▁geluk
- ▁scheelt
- baan
- ▁fo
- ▁grens
- ▁lu
- ▁plannen
- toe
- ▁fe
- ▁voet
- ▁sprake
- è
- ▁voorlopig
- ▁roept
- ▁ruzie
- ▁spoor
- ▁president
- ▁stand
- morgen
- ▁gelopen
- zaam
- ▁type
- ▁kwaad
- ▁geweld
- ▁onbe
- ▁spreek
- ▁degene
- ▁geldt
- ▁ploeg
- ▁verleden
- ▁bureau
- elijkheid
- lust
- ▁club
- ▁inhoud
- ▁dokter
- ▁gewone
- ▁belgië
- ▁koning
- programma
- bel
- ▁pla
- ▁bouwen
- isten
- zelf
- ▁tri
- ▁trok
- ▁stemmen
- ▁meegenomen
- bar
- ▁blok
- ▁gekozen
- ▁vandaar
- ▁ziekte
- ▁voeren
- ▁regelen
- ▁bekijken
- prijs
- vla
- ▁gesteld
- ▁waarde
- wen
- ▁driehonderd
- ▁video
- ▁zware
- lamp
- ▁begrepen
- sprak
- ▁bui
- geschreven
- regeling
- actie
- ▁red
- ▁risico
- ▁volledig
- ▁stam
- ▁samenleving
- ▁finale
- pak
- boo
- lijst
- ▁zowel
- ▁proces
- gu
- ▁verdienen
- ssie
- tiek
- ▁stappen
- ▁verband
- brengen
- ▁spreekt
- ▁motor
- feest
- ▁mam
- ▁stu
- ▁vvd
- ▁tevreden
- ▁geleerd
- ▁passen
- ▁trekt
- ▁vierentwintig
- ▁boe
- ▁vanochtend
- vers
- ▁nood
- ▁engeland
- ▁helder
- ment
- ▁nauwelijks
- terrein
- ▁schuld
- ▁rapport
- ▁ministerie
- ▁schip
- zaal
- ▁ronde
- ▁ervoor
- ▁koken
- ▁verkocht
- bureau
- netje
- kijken
- ▁hans
- ▁groningen
- ▁steun
- ▁negentig
- ertje
- ▁au
- ▁zul
- oei
- ▁gras
- ▁strand
- ▁rook
- ▁verantwoord
- ▁verschillen
- ▁blad
- ▁ware
- ▁mkz
- ▁levens
- bergen
- ▁rare
- ▁sneller
- ▁gewond
- ▁zwemmen
- ▁won
- ▁bericht
- ▁geel
- los
- ▁zonde
- ▁nachts
- ▁schrijft
- ▁gewend
- ▁voelen
- ▁aller
- ▁wim
- scherm
- ▁sturen
- ▁doordat
- ▁getrouwd
- ▁verdedig
- examen
- ▁serieus
- ▁scherp
- ▁vliegen
- ▁tro
- arm
- ▁angst
- ▁groene
- zon
- ▁ku
- ▁reed
- ▁verkoop
- ▁vent
- ▁west
- ▁onzin
- ▁rob
- ▁erin
- ▁erger
- van
- oli
- ▁mogelijkheden
- ▁duits
- apparaat
- ▁kre
- ▁ochtend
- ▁werkelijk
- communicatie
- ▁oplossing
- her
- ▁binnenkort
- geld
- au
- adres
- ▁afdeling
- maal
- gevallen
- ▁meegemaakt
- verantwoordelijk
- ▁rand
- ▁achtergrond
- ▁vervoer
- ▁kwa
- ▁the
- ▁schade
- ▁geb
- ▁helaas
- ▁plezier
- ▁breed
- oppen
- ▁stelling
- ▁scholen
- ▁uiteraard
- ▁geheel
- ▁opmerking
- ▁albert
- ▁plus
- ▁draait
- ▁vijfhonderd
- ▁allen
- loze
- ▁vergelijk
- ▁vlieg
- ▁prachtige
- ▁wagen
- ▁lijst
- ▁kno
- ▁rijdt
- eloos
- middel
- gegooid
- schep
- ▁rode
- ▁gedrag
- ▁centrum
- ▁trek
- lage
- ▁bedrag
- ▁wal
- ▁uurtje
- ▁vlucht
- ▁betaal
- ▁twijfel
- matig
- ▁prijzen
- ▁stukken
- lingen
- igd
- ▁speelde
- ▁directeur
- ▁anderhalf
- ▁boodschappen
- ▁smaak
- ▁cocu
- ▁ernstig
- ▁spul
- ▁uni
- punten
- ▁jeetje
- ▁herken
- ▁klap
- ▁verzamel
- ▁doorheen
- ▁groente
- ▁kru
- lach
- ▁burgemeester
- ▁speciaal
- ▁havo
- ▁artikel
- ▁bedenken
- pagina
- ▁eiland
- ▁werkte
- speler
- ▁ochtends
- aai
- ▁ser
- lle
- ▁veilig
- ▁partijen
- meer
- ▁logisch
- ▁goud
- air
- ▁april
- ▁besloten
- ▁duurder
- ▁strijd
- vrij
- geving
- ▁overleden
- ▁liefst
- afrika
- ▁maan
- ▁dure
- kracht
- gericht
- verandering
- ▁steek
- ▁riep
- ▁steden
- ▁lid
- maker
- kan
- vorm
- ▁bladzijde
- ▁limburg
- ▁wissel
- pjes
- oi
- ▁zorgt
- ▁buik
- aire
- waar
- ▁enkel
- doek
- ▁althans
- ▁combinatie
- ▁functie
- otje
- rmee
- ▁hemel
- ▁pakket
- ▁miljard
- ▁kennelijk
- ▁pen
- zame
- ▁gedachten
- ▁lijken
- ▁merken
- ▁verkeerde
- ▁tas
- ▁veranderen
- ▁premier
- zig
- ov
- ogen
- wand
- ▁betaalt
- ▁prins
- erij
- ras
- ▁enschede
- ▁positie
- ▁plaatje
- ▁baas
- old
- stad
- ▁perfect
- ▁gemeenten
- slagen
- stof
- ow
- ▁besluit
- zaak
- ▁begeleid
- ▁gesprekken
- ▁gezond
- ▁geprobeerd
- ▁september
- ▁reageren
- ▁studeren
- ▁rome
- ▁schat
- ▁thema
- haal
- ▁gooien
- tocht
- ▁cursus
- ▁flinke
- houding
- hy
- ▁gebeurde
- ▁verdien
- ▁voordeel
- ▁homo
- ph
- ▁johan
- ▁aanleiding
- lip
- cho
- ▁ontmoet
- ▁daarnaast
- ▁kies
- raden
- laat
- ▁kruis
- neke
- lette
- ▁kaarten
- ▁cda
- ▁nadenken
- ▁regelmatig
- ▁leest
- ▁soorten
- dam
- ▁toon
- hoek
- vragen
- ▁kor
- ▁volwassen
- rvoor
- ▁debat
- abel
- ▁pakt
- ▁schot
- stem
- ▁wisten
- kleed
- ▁geest
- ▁oktober
- ▁tentamen
- ▁shit
- ▁waarmee
- ▁kees
- bank
- ectie
- ▁museum
- ▁groepen
- ▁comp
- ▁noemde
- ▁harde
- ▁schrik
- vaardig
- ▁landbouw
- ▁gelukt
- pakken
- ▁col
- ▁wandel
- ▁schoenen
- ▁anti
- lap
- ▁product
- ▁bereikt
- ▁tweeëntwintig
- ▁uitgebreid
- ▁leveren
- ▁begrijpen
- schouw
- ▁instantie
- ▁verlies
- ▁gestuurd
- visie
- gelopen
- igen
- uig
- ▁keren
- ▁mor
- bed
- gelegen
- ▁haalde
- ▁pad
- ▁regels
- ▁gemiddeld
- ▁doos
- ▁fre
- mar
- ▁melk
- ▁hing
- ▁ondertussen
- ▁mol
- ▁piet
- ▁jeroen
- ▁positief
- staat
- holland
- ▁boerderij
- ▁winst
- ▁uitleggen
- ▁geraakt
- ▁las
- ▁hol
- ssel
- appen
- ▁gericht
- ▁belangrijkste
- ▁pap
- festival
- ▁bereid
- concert
- ▁sneeuw
- eet
- ▁volle
- ▁meid
- ▁morgens
- ▁dergelijke
- ▁israël
- materiaal
- ▁baby
- centrum
- ▁systeem
- wijze
- gedaan
- ▁overdag
- ▁sno
- ▁huur
- ▁john
- bla
- oef
- ▁wegge
- ▁opnemen
- ▁herinneren
- ▁leuker
- inder
- ▁liedjes
- ▁stelt
- chauffeur
- ▁zesentwintig
- ▁fel
- ▁slaan
- ▁bandje
- hoofd
- gie
- ▁vaste
- daal
- ▁eenentwintig
- ▁kwestie
- ▁bril
- ▁veiligheid
- ▁armen
- vuil
- toets
- ▁wegen
- ▁hobby
- ▁belachelijk
- ▁regio
- ▁bad
- ▁lieve
- ▁eraan
- ▁roze
- zijn
- ▁brede
- neem
- fabriek
- ▁italië
- ▁reclame
- ▁noemt
- ▁gre
- ▁fase
- ▁belt
- ▁kut
- kast
- bellen
- beleid
- ▁stro
- ▁tim
- ▁ondank
- ▁goedkoop
- sloot
- ▁kanten
- ▁liepen
- ▁invloed
- ▁definitie
- ▁gestaan
- ▁rekenen
- ▁nja
- hangen
- ▁lessen
- ▁raakt
- ▁meen
- brug
- ici
- ▁voorzichtig
- ▁robbie
- ▁vrolijk
- ▁zolang
- ▁media
- spraak
- organisatie
- bedrijf
- lla
- ▁bert
- water
- ▁vloer
- ▁klok
- deur
- ▁gebleven
- ▁numan
- ▁trouw
- ▁vanzelf
- borg
- bul
- ▁behoefte
- ▁café
- ▁stroom
- ▁voetballen
- ▁fransen
- stort
- ▁rivier
- ▁file
- ▁warme
- ▁kei
- ▁dichter
- klachten
- print
- fu
- ▁lam
- ▁persoonlijk
- ▁jos
- ▁kri
- ▁akkoord
- ▁aardappel
- ▁kader
- ▁spier
- rand
- ▁ab
- ▁keurig
- ▁tegelijk
- ▁optreden
- ▁mavo
- ▁brengt
- ▁zusje
- ▁zult
- ide
- ▁tu
- ook
- ▁com
- ▁kerel
- ▁nota
- ▁vreemde
- ▁franse
- lei
- ▁serie
- ▁dijk
- ▁jarig
- ▁teken
- tegen
- ali
- ▁plotseling
- ▁rooie
- ▁termijn
- ▁nico
- ▁vierhonderd
- ▁zagen
- ▁artikelen
- rol
- ▁ajax
- ▁glimlach
- ▁provincie
- ▁graden
- ▁zand
- ▁raakte
- ▁draag
- ▁gelegd
- ▁wapen
- ▁vertrouwen
- ▁cas
- ▁mogelijkheid
- spring
- ▁trots
- ▁aardige
- ▁formule
- ▁honger
- ▁graan
- ▁openbaar
- wacht
- ▁fractie
- ▁verloren
- ▁leraar
- ▁schilder
- ▁vergadering
- ▁politieke
- ▁vormen
- ▁tennis
- ▁mochten
- kamp
- rachter
- ▁gast
- ▁zielig
- ▁dia
- bu
- partner
- ieve
- ▁new
- ▁bedoeld
- rna
- ▁dachten
- ▁speel
- ▁januari
- toernooi
- hei
- ▁spa
- link
- stig
- ▁talen
- ▁oudere
- ▁theo
- ö
- ▁droog
- ▁zonne
- ▁japan
- ▁hek
- ▁wandelen
- ▁joegoslavië
- ▁schiphol
- techniek
- ▁aanzien
- ix
- wijd
- ance
- ▁onderweg
- trekt
- ▁algemene
- ▁cadeautje
- ▁treinen
- ▁dol
- ▁vies
- ▁feestje
- ▁gereden
- ▁zakken
- ▁gesloten
- ▁verstand
- ▁hopen
- ▁gedachte
- gesprek
- ▁els
- ▁voorbereid
- ▁probeerde
- tuin
- crisis
- record
- ▁kaas
- ▁doei
- waarde
- ▁doorgaan
- ▁stopt
- ▁enthousiast
- ▁mobiel
- essen
- ator
- gevaar
- ▁schrijver
- ▁vries
- ▁goedkoper
- houder
- aliteit
- ▁bosvelt
- ▁vannacht
- ▁dames
- ▁kern
- ▁achteraf
- istisch
- ▁seizoen
- ▁reiziger
- vermogen
- ▁beurs
- ley
- ▁vingers
- prik
- ▁para
- ▁stichting
- drijven
- ski
- leefd
- reis
- ▁tja
- ▁sfeer
- ▁gemakkelijk
- sector
- ▁vertelt
- ▁lagen
- ▁sluiten
- ▁henk
- gedrukt
- winkel
- fonds
- ▁wetsvoorstel
- ▁vergeet
- ▁ontdekt
- ▁knip
- laan
- graven
- ▁huisje
- vraag
- ▁onderscheid
- club
- dingen
- bli
- ▁geë
- streek
- ü
- ▁augustus
- contract
- ▁kring
- niveau
- ▁conclusie
- ▁schitterend
- ▁specifiek
- ▁geweten
- ▁nogmaals
- ▁verslag
- ▁groeien
- ▁leerling
- ▁tegelijkertijd
- ▁typisch
- ▁normale
- ▁begrip
- ▁huidige
- ▁oudste
- methode
- gesproken
- ▁allang
- cus
- ▁advies
- gevuld
- ▁geloven
- ▁ophalen
- ▁flauw
- wagen
- ▁mal
- sturen
- ▁taak
- ▁zacht
- regel
- ▁verandert
- ▁joost
- ▁schilderij
- ▁kranten
- ▁nader
- bui
- ▁moeilijke
- ▁eindhoven
- achtige
- ▁gaaf
- ▁piep
- koek
- ▁roken
- ▁cel
- ▁tik
- ▁vijfendertig
- ▁verdacht
- ▁vermoed
- ▁saai
- ▁verwachten
- ▁pol
- ▁onderdeel
- ▁erts
- ▁heren
- ▁ingewikkeld
- ▁zwanger
- ▁spanning
- ▁sloeg
- ▁lager
- istische
- ▁blauwe
- ▁camping
- ▁opname
- ▁speciale
- gebouw
- ▁lagere
- ▁paard
- ▁drieëntwintig
- ▁juffrouw
- genoten
- toestand
- ▁gevolg
- ▁hoefde
- ▁sporten
- bur
- storm
- ▁rijks
- dom
- '2'
- ▁officieel
- ▁afspraken
- ▁parijs
- ▁oeh
- stoffen
- partij
- lied
- markt
- truc
- bor
- ramp
- ▁vrede
- ▁ongelooflijk
- ▁bedankt
- ▁spanje
- ▁toneel
- ▁kleuren
- ▁bieden
- ▁herman
- ▁leo
- ▁hersen
- ▁voorlezen
- maakt
- ▁tand
- ▁franc
- ▁strak
- ▁zuur
- ▁verwe
- klas
- ▁ophangen
- ji
- ▁afscheid
- streep
- line
- ▁konijn
- ▁drieën
- ▁martin
- ▁drukke
- ▁bron
- ▁wc
- ▁bereiken
- ▁dagelijks
- ▁noorden
- orde
- ▁nek
- ▁vinger
- ▁maatschappelijk
- ▁behandeling
- schappen
- ▁gepraat
- ▁tour
- ▁eenvoudig
- ▁puur
- vakantie
- ▁neu
- ▁steken
- bus
- pte
- ▁lul
- ▁gestopt
- ▁douche
- ▁kantoor
- ▁wonder
- ▁engelse
- ▁spaans
- ffer
- instituut
- ▁centraal
- ▁helpt
- ▁letten
- student
- ▁koor
- ▁trouwen
- ▁moeilijker
- ▁donkere
- ▁parlement
- unie
- dreven
- onderzoek
- ▁loon
- gewo
- ▁kip
- ▁veroorzaakt
- tunnel
- ▁model
- ▁dwars
- ▁streng
- ▁soep
- huizen
- schot
- houdt
- trap
- lia
- ▁kampioen
- wind
- ▁luid
- ▁marc
- ▁invullen
- theater
- ▁bakker
- uch
- pomp
- gewerkt
- blind
- lucht
- ▁milosevic
- alexander
- ▁waarschuw
- ▁verenigd
- ▁koeien
- ▁raken
- ▁pri
- ▁dingetjes
- erig
- ▁kust
- 'off'
- ara
- ▁beelden
- anse
- ▁grotere
- gevoel
- ▁gedichten
- ▁maastricht
- ▁varkens
- ▁tussendoor
- ▁massa
- ▁hup
- reed
- ▁kilo
- ▁vertrouw
- ▁vriendje
- ▁vervolg
- ▁interview
- ▁katholiek
- ▁theorie
- ▁sterren
- ▁koos
- ▁aanval
- ▁schapen
- ▁piek
- ob
- ▁uitstekend
- duc
- rtussen
- ▁bekende
- ▁max
- ▁degelijk
- ▁actief
- ▁buren
- tuele
- ▁stilte
- structuur
- ▁kritiek
- ▁bijbel
- ▁sar
- ▁staten
- akker
- ▁lelijk
- ▁pal
- ▁boel
- ▁totdat
- ▁slingers
- ▁huishoud
- tuigen
- ▁leger
- ▁vijfenveertig
- genoot
- geschoten
- dagen
- ▁groepje
- kunde
- ▁gekke
- gids
- ▁gouden
- ▁klant
- ▁alsjeblieft
- ▁zanger
- tafel
- ▁heilig
- boven
- ▁duren
- ▁pim
- station
- cijfer
- ▁december
- ▁ideaal
- auw
- ▁maatregelen
- ▁cijfers
- ▁gezondheid
- nacht
- ▁gedoe
- akel
- ▁appel
- ▁ramen
- ▁stenen
- ▁broertje
- letjes
- gestuurd
- ▁rugzak
- ▁kaartjes
- ▁consument
- ▁ronald
- ▁tientallen
- ▁taxi
- ▁ouderwets
- ▁brussel
- ▁vuurwerk
- ▁oren
- ▁controleren
- ▁kou
- ▁klanten
- ▁bosch
- ▁onver
- ▁meedoen
- ▁vakken
- ▁producten
- schrijf
- ▁agenda
- vorming
- ▁huilen
- ▁brandweer
- ▁ideeën
- ▁onzeker
- ▁doden
- ▁sint
- ▁fries
- omstandigheden
- ▁financiële
- ▁negenennegentig
- ▁onmiddellijk
- ▁praktisch
- ▁bie
- ▁kippen
- ▁ermee
- slaan
- virus
- ▁waa
- rang
- ▁hai
- ▁verstandig
- ▁beslissing
- ▁bepalen
- ▁training
- ▁volgt
- ▁erik
- norm
- ▁sander
- ▁stapel
- ▁afhankelijk
- ▁tenslotte
- ▁verbaasd
- ▁vijfenzeventig
- verklaring
- ▁floor
- ▁droom
- gebruik
- ▁italiaanse
- film
- broek
- ▁bijzondere
- ▁drink
- ▁boodschap
- ▁centimeter
- ▁internationale
- ▁friesland
- ▁belangstelling
- stoel
- ▁tru
- ini
- gerecht
- rooster
- ▁vecht
- ▁char
- ▁protest
- ▁interesse
- ▁klonk
- ▁straten
- ▁pauze
- ▁vul
- ▁duurde
- ▁omgaan
- ▁harder
- ▁sleutel
- ▁aantrekkelijk
- ▁economie
- bescherming
- ▁zucht
- ▁standaard
- ▁gebouwd
- zoek
- ▁achtentwintig
- ▁tweedehands
- ▁vocht
- ▁rechterkant
- ▁voetballer
- gerekend
- fractie
- ▁maag
- kosten
- ▁meerdere
- ▁roman
- ▁schoot
- ▁michael
- ▁kleding
- ▁gelegenheid
- preek
- ▁ruud
- ▁bracht
- ▁haak
- ruimte
- ▁bekijk
- gedragen
- ▁planten
- ▁brinkhorst
- ▁getroffen
- ▁tilburg
- ▁automatisch
- verkeer
- ▁schaap
- ▁keel
- ▁werkelijkheid
- ▁kas
- ▁moderne
- ▁productie
- ▁uitgenodigd
- '66'
- ▁golf
- ▁brits
- ▁geheim
- ▁overheen
- ▁besef
- ▁huid
- ▁boog
- ▁levert
- ▁belg
- ▁amendement
- ▁gigantisch
- ▁houten
- gedraaid
- ▁schi
- ▁nij
- ▁gasten
- plaat
- presentatie
- ▁spijt
- ▁linkerkant
- ▁simon
- ▁elftal
- ▁vaag
- ▁lot
- wedstrijd
- ▁grapje
- directie
- ▁term
- ochtend
- ▁schaal
- ▁irritant
- ▁turkije
- begroting
- ▁garage
- ▁hypo
- ▁argumenten
- ▁klik
- ▁medisch
- ▁luc
- patiënt
- instrument
- bewoners
- verzekering
- ▁oostenrijk
- iviteit
- ontwerp
- ▁losse
- ▁schouders
- ich
- ▁voortdurend
- ▁überhaupt
- ▁zogenaamd
- ▁rommel
- ▁element
- ▁patiënten
- ▁slachtoffers
- ▁heijn
- ▁begreep
- ▁zwak
- ▁bezoekers
- ueel
- blijven
- ▁tekenen
- ▁joris
- ▁slaapkamer
- ▁seks
- ▁stomme
- ▁draaide
- ▁zuster
- ▁paarden
- ▁riet
- ▁bom
- ▁uitgaan
- ▁tekort
- ▁hoogste
- grond
- ▁koninginnedag
- ▁microfoon
- ▁november
- ▁beroemd
- dracht
- ▁aparte
- ▁verzet
- beek
- systeem
- ▁moord
- ▁bevolking
- ▁stevig
- ▁verkoper
- ▁muis
- ▁duitsers
- ▁verhouding
- ▁gemeenschap
- ▁balie
- duik
- blond
- ▁snelle
- ▁beloofd
- ▁bioscoop
- ▁relaxed
- ▁middelbare
- ▁klim
- ▁cool
- ▁opzichte
- vrouw
- snel
- ▁jongetje
- positie
- ▁knie
- ▁vrijheid
- ▁constant
- ▁plastic
- ▁verdwenen
- ▁zevenentwintig
- ▁opzoeken
- ▁smal
- ▁peper
- knop
- ▁opzet
- honderdduizend
- bord
- ▁argument
- industrie
- ▁olympisch
- ▁vlakbij
- ▁vrijwillig
- ▁oefenen
- ▁suiker
- ▁herfst
- ▁dragen
- ▁zelfstandig
- ▁dirk
- ▁tellen
- ▁inzet
- ▁mooiste
- ▁yo
- dijk
- saus
- kruid
- ▁scheidsrechter
- ▁voornamelijk
- ▁verzinnen
- ▁eisen
- ▁kopje
- proces
- wereld
- ▁gordijn
- ▁vlug
- ▁beperkt
- ▁tof
- ▁mailtje
- ▁bezit
- ▁schuur
- ▁leidt
- worp
- cellen
- ▁beantwoord
- ▁langzamerhand
- ▁overtreding
- ▁ontvangen
- ▁personen
- ▁zilver
- ▁vrachtwagen
- ▁verstaan
- ▁prijsniveau
- ▁respect
- ▁springen
- ▁gedronken
- ▁opening
- geeft
- ▁gevolgen
- ▁dru
- rum
- meester
- architect
- ▁bespreken
- ▁negatief
- ▁droeg
- ▁kpn
- ▁score
- ollen
- ▁dennis
- ▁mannetje
- ▁race
- ▁david
- ovic
- ▁gevolgd
- ▁boete
- ▁landelijk
- stroom
- '1'
- ▁halverwege
- ▁philip
- ▁culture
- ▁genieten
- ▁sterker
- ▁knikt
- sluiting
- ▁redden
- macht
- ▁fruit
- ▁lijstje
- rijke
- ami
- lezen
- ▁kletsen
- ▁ontwikkelen
- ▁salaris
- ▁dichtbij
- ▁tegenstander
- ▁behandeld
- schuif
- ▁geplaatst
- ▁pil
- ▁touw
- ▁besteden
- ▁netwerk
- ▁vertrekken
- oosten
- ▁sam
- ▁verlaten
- ▁parkeer
- ▁huren
- dronken
- schreeuw
- zitten
- kuil
- ▁geïnteresseerd
- diploma
- ▁huisartsen
- ▁bekeken
- ▁economische
- auto
- spelletje
- ▁persoonlijke
- ▁pakte
- ▁olie
- vier
- ▁liedje
- ▁alcohol
- ▁klauwzeer
- ▁lukken
- ▁oppassen
- ▁rusland
- ▁schudde
- ▁alex
- ▁kook
- kind
- tempel
- ▁uitspraak
- ▁patrick
- ▁verdieping
- ▁eu
- ▁leerkracht
- ▁drank
- ▁samenwerking
- ▁bolle
- ▁buitenlandse
- ept
- ▁onmogelijk
- ▁glo
- ▁professor
- ▁populair
- rry
- ▁lullig
- ▁lijf
- ▁bush
- ▁neergezet
- ▁lol
- ▁artsen
- ▁voorzien
- onderwijs
- ▁rijst
- operatie
- ▁huiswerk
- ▁briefje
- ▁legde
- sport
- ▁favoriet
- ▁heuvel
- ▁figuur
- ▁christus
- ▁mont
- ▁verspreid
- ius
- ▁marcel
- ▁indien
- ▁verwachting
- ▁verzoek
- ▁medicijnen
- ▁versie
- ▁modern
- ▁missen
- ▁fris
- ▁waarvoor
- ▁gesp
- ▁achtennegentig
- ▁asielzoekers
- ▁australië
- ▁geholpen
- ▁recept
- ▁ellende
- ▁zwolle
- ▁brievenbus
- ▁diezelfde
- ▁belgische
- volle
- ▁militaire
- persoon
- vriend
- ▁achterin
- kleur
- ▁buitengewoon
- ▁februari
- ▁georganiseerd
- ▁sociaal
- ▁uitdrukking
- ▁onduidelijk
- ▁york
- ologie
- vlek
- ▁besteed
- ▁heus
- nummer
- ▁leem
- muziek
- ▁klus
- ▁zeshonderd
- ▁plekken
- ▁interessante
- low
- ▁bah
- ▁sarah
- ▁bemoei
- ▁linker
- ▁bewegen
- ▁gedicht
- ▁bijdrage
- ▁onbekend
- ▁vissen
- ▁snelweg
- commissie
- viel
- ▁tienduizend
- ▁bod
- ▁aanbieding
- ▁feyenoord
- ▁karakter
- campagne
- therapie
- ▁schreef
- ▁londen
- ▁rechtstreeks
- ▁zwitserland
- ▁bedenk
- post
- ▁achteruit
- ▁vos
- ▁zwem
- subsidie
- ▁afspreken
- ▁vijfenzestig
- ▁wereldoorlog
- ▁robert
- ▁inleveren
- ▁voorkeur
- ▁betrekking
- ▁dick
- ▁onderhand
- ▁tong
- ▁russische
- ▁verrassing
- ▁zwembad
- ▁boeiend
- ▁kunstenaar
- ▁vervangen
- ▁griekenland
- kort
- ▁verkiezingen
- ▁afgesloten
- ▁broodje
- ▁tip
- ▁bram
- hard
- blijf
- ▁klassen
- ▁terugkomen
- ▁andersom
- ▁cadeau
- maatregel
- ▁gevangen
- ▁ambtenaren
- bijeenkomst
- ▁christelijke
- ▁diverse
- shirt
- ▁tempo
- ▁brieven
- ▁vlag
- ▁omheen
- ▁grap
- ▁vriendelijk
- ▁vriendinnen
- ▁vertrek
- ▁liter
- hoven
- vulling
- ▁dode
- ▁snelheid
- ▁koel
- ▁verliefd
- ▁beslist
- ▁geboorte
- management
- ▁drugs
- ▁munt
- ▁basisschool
- ▁zolder
- ▁maria
- neming
- ▁paas
- ▁letterlijk
- poot
- rekening
- greep
- ▁individu
- ▁europees
- ▁buurman
- ▁generatie
- ▁geschikt
- ▁rellen
- ▁oordeel
- ▁verleng
- poli
- ▁hekel
- ▁opvallend
- ▁katten
- ▁app
- ▁groeit
- ▁bil
- ▁concurrentie
- ▁sigaret
- ▁verdriet
- ▁pannen
- ▁vierkante
- ▁maas
- ▁melden
- trein
- ▁vwo
- ▁banen
- ▁pub
- ▁lisa
- ▁eigenaar
- polder
- ▁enigszins
- ▁ongetwijfeld
- ▁ontwikkeld
- show
- ▁zachtjes
- ▁poging
- manager
- ▁marij
- ▁spelers
- ▁pest
- ▁buurvrouw
- ▁fortuyn
- ▁organiseren
- ▁resultaat
- ▁toestemming
- ▁aangeboden
- ▁zodanig
- ▁schaduw
- ▁uitzoeken
- ▁pronk
- ▁poeh
- ▁immers
- ▁kroeg
- ▁kleintje
- ▁nationale
- ▁bob
- kundige
- ▁schouder
- wiel
- ▁tekening
- service
- ▁voedsel
- ▁overtuigd
- ▁ziekenhuizen
- ▁telkens
- complex
- ▁minstens
- ▁drama
- oefening
- ▁verboden
- ▁lesgeven
- ▁triest
- ▁korting
- ▁verhuizen
- ▁mini
- ▁zachte
- project
- conferentie
- ▁rozemarijn
- ▁scriptie
- wetenschappelijk
- ▁gezocht
- ▁oorzaak
- blaadje
- ▁jurk
- ▁morgenavond
- richting
- schild
- procedure
- ▁sinterklaas
- ▁pvda
- ▁daarachter
- ▁uitvoering
- ▁koffer
- ▁troep
- ▁lente
- ▁minst
- ▁brak
- ▁afloop
- ▁uitzend
- ▁puntje
- ▁herstel
- ▁positieve
- ▁geslapen
- ▁nagedacht
- ▁nieuwsgierig
- ▁onafhankelijk
- ▁gunstig
- ▁neerzetten
- wekkend
- ▁klimaat
- ▁landschap
- ▁waal
- ▁warmte
- ax
- raakt
- technisch
- ▁vijfennegentig
- assistent
- ▁leeuw
- ▁bezwaar
- ▁normen
- ▁ontstaat
- ▁stadion
- trok
- ▁achterkant
- ▁appartement
- ▁internationaal
- ▁aanwijzing
- ▁paniek
- fiets
- gekregen
- ▁bagage
- ▁daarheen
- ▁wennen
- ▁onthouden
- verbod
- oloog
- ▁compleet
- ▁juridisch
- daagse
- ▁biologisch
- ▁lijden
- verdi
- ▁trouwdag
- ▁geïn
- overzicht
- ▁lippen
- ▁kabel
- ▁india
- ziekte
- ▁dorpje
- ▁achthonderd
- ▁roos
- isering
- ▁besproken
- ▁onwijs
- ▁betekenis
- ▁bedrijfsleven
- ▁alvast
- ▁stoep
- ▁marijke
- gezegd
- ▁bou
- ▁doelpunt
- ▁advocaat
- ▁gezamenlijk
- ▁ontbijt
- ▁afwachten
- ▁zozeer
- ▁sterven
- ▁gemak
- immer
- ▁vrijdagavond
- vlees
- ▁merkte
- ▁tijdschrift
- ▁anna
- ▁site
- ▁wint
- ▁jongste
- ▁alhoewel
- ▁opvoeding
- ▁uitgelegd
- oppervlak
- ▁lunch
- ▁duiven
- ▁destijds
- ▁keihard
- ▁psv
- ▁aanbod
- ▁verdiend
- gebeld
- ▁spaar
- hulp
- rnaast
- ▁activiteiten
- ▁oorspronkelijk
- ▁gebroken
- ▁disco
- stoot
- ▁haarlem
- ▁gevangenis
- ▁schoonmaken
- ▁nuttig
- worst
- ▁wang
- ▁toestel
- xxx
- wijn
- ▁schor
- ▁gevoelig
- formulier
- transport
- ▁vermoord
- ▁inzicht
- ▁herhaal
- ▁terras
- luisteren
- gekeken
- ▁dingetje
- deskundige
- stelsel
- ologisch
- ▁corpus
- gewassen
- ▁niels
- ▁balkon
- ▁bio
- ▁vergis
- ▁hecht
- ▁eieren
- ▁beatrix
- ▁lichamelijk
- ▁koninklijk
- ▁spijker
- ▁ingediend
- ▁verbonden
- ▁woede
- ▁koppel
- ▁staking
- ▁tank
- ▁rondom
- west
- ▁wouter
- ▁vergelijking
- ▁pizza
- ▁extreem
- ▁friettent
- schuiven
- ▁schelen
- gesloten
- ▁visser
- ▁geestelijk
- ▁adoptie
- ▁volkskrant
- ▁vertraging
- ▁kikker
- ▁trainen
- ▁gerard
- ▁lood
- storing
- ▁vullen
- bloei
- zetting
- lepel
- verdrag
- iswaar
- ▁ib
- ▁tweeëndertig
- ▁zwijg
- ▁lawaai
- ▁tegemoet
- ▁toilet
- vestiging
- ▁zweden
- ▁interesseert
- ▁kassa
- ▁raak
- ▁verdeeld
- ▁medewerkers
- ▁gepland
- ▁amerikaan
- duid
- ▁initiatief
- ▁bronckhorst
- ▁consequent
- ▁daarnaartoe
- ▁agenten
- ▁klassieke
- ▁uiterlijk
- ▁voorbereiding
- romheen
- periode
- problemen
- dorp
- geschakeld
- pannenkoekenhuis
- ▁expres
- ▁financieel
- ▁judith
- ▁pensioen
- ▁overeind
- gespannen
- ▁detail
- ▁kloppen
- ▁kuip
- ▁vlot
- ▁zevenhonderd
- president
- vlucht
- '3'
- à
- ▁explosie
- ▁negenentwintig
- ▁veroordeeld
- ▁vierennegentig
- ▁reserve
- ▁marieke
- ▁schijf
- ▁fnv
- ▁china
- ▁vliegveld
- ▁buitenspel
- doelstelling
- ▁jacob
- ▁bocht
- gespeeld
- vlieg
- verschil
- plannen
- ▁budget
- ▁uitzondering
- apparatuur
- persoonstenten
- ▁sophie
- ▁fleur
- ▁accent
- ▁uitspraken
- spreker
- ▁grijp
- vlogen
- ▁geopend
- ▁badkamer
- ▁telefoonnummer
- ▁turkse
- ▁binnenlands
- ▁supermarkt
- moedig
- beweging
- wisseling
- vergunning
- ▁gestudeerd
- congres
- ▁lokale
- ▁ervaren
- ▁onrust
- ▁mouwen
- ▁commun
- ▁gemerkt
- ▁correct
- ▁hoofdstad
- stroming
- ▁onderhoud
- ▁standpunt
- ▁privé
- '0'
- ▁joegoslaven
- ▁thomas
- ▁durven
- ▁friet
- ▁eeuwig
- ▁schreeuwen
- ▁functi
- ▁opzicht
- situatie
- praten
- gooien
- ▁beïnvloed
- ▁experiment
- ▁beschikbaar
- ▁opvatting
- ▁noodzakelijk
- prestatie
- ▁onlangs
- ▁trainer
- ▁vieze
- ▁gemist
- ▁emmer
- strijd
- ▁nest
- ▁wereldkampioen
- slachtoffer
- ▁horizon
- ▁máxima
- ▁voeding
- ▁vertrokken
- ▁dollar
- ▁opgelost
- ▁chaos
- orkest
- ▁gerust
- ▁bodem
- ▁uitkering
- ▁staart
- ▁dict
- gestaan
- ▁hooi
- ▁bezorgd
- ▁breng
- ▁chocola
- ▁onderzocht
- ▁plaatsvinden
- ▁smerig
- gesneden
- ▁westerveld
- ▁toeristen
- ▁minimaal
- ▁grijs
- plakken
- ▁taart
- ▁tenzij
- ▁blank
- ▁droge
- ▁attractie
- ▁joodse
- ▁verdwijnen
- ▁medaille
- ▁toezicht
- ▁borrel
- ▁zojuist
- ▁knieën
- ▁deventer
- ▁verlangen
- ▁long
- ▁genua
- ▁soldaten
- ▁elektro
- ▁kleuter
- ▁overwinning
- ▁bijlmer
- ▁meerderheid
- ▁rennen
- ▁spelletjes
- ▁gevoelens
- ▁keus
- will
- ▁prinses
- ▁vijfenvijftig
- ▁allochtone
- ▁netelenbos
- ▁postkantoor
- democra
- ▁lengte
- ▁werkgevers
- ▁arnold
- ▁geslaagd
- ▁groenlinks
- schoenen
- bestrijding
- ▁piano
- ▁aannemen
- ▁bewaren
- ▁verloor
- ▁tandarts
- ▁hevig
- ▁hiermee
- ▁getuige
- ▁volgde
- vatten
- ▁stier
- ▁neven
- ▁gebrek
- ▁slaapzak
- vleugel
- ▁drieëndertig
- ▁vluchtelingen
- ▁religie
- heffing
- ▁leraren
- ▁biedt
- ▁strip
- ▁bevalt
- ▁breda
- ▁bill
- type
- voet
- kundig
- document
- gedwongen
- ▁negatieve
- ▁ongelofelijk
- ▁tjonge
- ▁indonesië
- ▁rotzooi
- ▁vlinder
- ▁ruilen
- ▁koelkast
- ▁sliep
- ▁jazeker
- amerika
- ▁kelder
- voorzitter
- ▁herkennen
- ▁graaf
- grepen
- ▁seconde
- ▁ambitie
- ▁eenendertig
- ▁geregistreerd
- ▁intelligent
- ▁paspoort
- '4'
- ▁andré
- ▁dugarry
- ▁etalage
- ▁officiële
- ▁arbeidsmarkt
- ▁heftig
- ▁intussen
- ▁gezeur
- ▁uitvoeren
- ▁exact
- ▁neder
- ▁sms
- ▁pech
- geluid
- voorstel
- ▁omroep
- ▁barcelona
- ▁caravan
- ▁moslim
- ▁ontdekken
- ▁website
- ▁wiskunde
- ▁neiging
- ▁carnaval
- ▁dichterbij
- ▁uitzicht
- ▁gezag
- ▁kussen
- meubel
- ▁melkert
- ▁vrees
- salade
- ▁plafond
- groot
- ▁athene
- ▁inwoners
- ▁neergelegd
- ▁passagiers
- ▁pretpark
- ▁anwb
- ▁bommel
- ▁schuin
- ▁zichtbaar
- ▁morgenochtend
- ▁bruid
- ▁afwas
- ▁turk
- ▁nep
- ▁continu
- ▁denemarken
- ▁dialect
- ▁regeerakkoord
- ▁treffer
- ▁maaike
- ▁vloei
- ▁verliezen
- ▁biologie
- ▁stress
- ▁gym
- ▁josé
- ▁technische
- ▁rechtbank
- minister
- vergadering
- ▁schoen
- spoed
- frequentie
- ▁achtendertig
- ▁apeldoorn
- ▁emotie
- ▁onvoldoende
- ▁vierendertig
- ▁vriezen
- ▁stijgen
- ▁tsjech
- ▁incident
- ▁psycho
- ▁luxe
- ▁paleis
- ▁benzine
- ▁berlijn
- ▁beschermen
- ▁individuele
- ▁strik
- ▁lift
- keuring
- erwijs
- nederland
- ▁rente
- run
- ▁behang
- verkiezing
- ▁traditione
- commissaris
- tribunaal
- ▁evaluatie
- ▁zorgvuldig
- generaal
- verhoging
- ▁natuurkunde
- ▁klote
- ▁joep
- klaar
- plek
- scheiding
- ▁oproep
- constructie
- ▁ballonnen
- ▁negenendertig
- ▁vertegenwoordig
- ▁studio
- officier
- winnaar
- ▁gestoken
- ▁zelden
- ▁brugge
- waaien
- ploeg
- pakket
- ▁bezet
- horst
- liggend
- ▁verpleeg
- ▁administratie
- ▁afschuwelijk
- ▁annemarie
- ▁belooft
- ▁creëren
- ▁functioneren
- ▁gescheiden
- ▁mihajlovic
- ▁rijkaard
- ▁ijssel
- ▁uitgangspunt
- product
- relatie
- jarig
- einde
- meldt
- ▁kritisch
- stromen
- ▁vanzelfsprekend
- ▁beslissen
- ▁boterham
- ▁karembeu
- ▁ontstond
- ▁marokkaan
- ▁poosje
- ▁ketting
- ▁konterman
- ▁toelichting
- ▁richard
- ▁factor
- ▁dunne
- ▁linda
- ▁diederik
- ▁rolstoel
- wijziging
- ▁slinger
- ▁slee
- ▁chic
- cursus
- geschiedenis
- fase
- pijl
- ▁circ
- ▁snij
- ▁etappe
- ▁identiteit
- ▁palestijns
- ▁spontaan
- ▁vergeleken
- ▁zesendertig
- ▁fundament
- ▁brugklas
- ▁zelfmoord
- ▁handtekening
- ▁oplossen
- systemen
- ▁martijn
- ▁behandelen
- ▁stoer
- snijden
- gebonden
- ▁spee
- ▁glad
- ▁mentor
- geacht
- pillen
- ▁margriet
- ▁schmeichel
- ▁mislukt
- ▁bidden
- ▁betoog
- ▁stemming
- ▁wolken
- ▁maximaal
- ▁protestant
- ▁dominee
- praat
- ongeluk
- kunst
- ▁italiaan
- blessure
- ▁accepteren
- ▁achtenveertig
- ▁agressie
- ▁bibliotheek
- ▁hendrik
- ▁stiekem
- ▁zesennegentig
- sponsor
- ▁woordvoerder
- ▁twente
- ▁merkwaardig
- ▁gezelschap
- ▁laurens
- ▁pluk
- ▁teveel
- ▁naderhand
- kamerlid
- voorziening
- ▁gecontroleerd
- ▁hockey
- ▁nadrukkelijk
- ▁ontspannen
- ▁toepassing
- ▁leeuwarden
- ▁gebleken
- ▁somber
- ▁pleeg
- ▁dreigen
- ▁maaltijd
- ▁groeten
- ▁folder
- ▁status
- ▁speelgoed
- gehakt
- instellingen
- chines
- museum
- ▁boudewijn
- ▁gestolen
- ▁makaay
- ▁sollicitatie
- ▁podium
- ▁defensie
- ▁jolanda
- ▁priester
- ▁volstrekt
- ▁aangezien
- ▁hierbij
- ▁verstuur
- ▁delft
- ▁onverwacht
- ▁geboekt
- ▁stink
- betaling
- ▁zonnig
- stamp
- gevraagd
- bespreking
- ▁batterij
- ▁begrafenis
- ▁realiseren
- ▁spanjaard
- ▁zevenennegentig
- ▁investeren
- ▁luuk
- ▁multi
- investering
- ▁gefietst
- ▁ambt
- ▁randstad
- ▁straal
- ▁envelop
- ▁annemiek
- ▁gereageerd
- ▁liesbeth
- ▁tweeënveertig
- journaal
- ▁vervanging
- ▁vijand
- ▁eén
- tarief
- ▁irak
- ▁laura
- ▁snoep
- politie
- ▁leun
- roverheen
- gedrag
- gebleven
- probleem
- vroeg
- plas
- geteld
- ▁boxtel
- ▁cetera
- ▁chinees
- ▁concentratie
- ▁evelien
- ▁musical
- ▁slordig
- ▁illegale
- ▁sofie
- ▁ruikt
- ▁edwin
- ▁hoeverre
- ▁verricht
- brief
- ▁afgestudeerd
- ▁gehandicapt
- ▁verbetering
- competitie
- ▁brommer
- ▁sparta
- ▁aanvaard
- ▁overnemen
- ▁herrie
- ▁rené
- ▁verkering
- ▁vertaling
- ▁schrok
- ▁daniël
- ▁troost
- rapport
- ontwikkeling
- ▁grieken
- misbruik
- ondersteuning
- ▁aanbieden
- ▁amersfoort
- ▁bruiloft
- ▁gestorven
- ▁kachel
- ▁rijbewijs
- ▁teleurgesteld
- ▁woestijn
- collectie
- ▁spoorwegen
- ▁godsdienst
- ▁esther
- onderhandelingen
- ▁betekenen
- ▁takken
- ▁uitzendbureau
- ▁voorwaarden
- ▁overweging
- breuk
- astenverlichting
- indeling
- belasting
- papier
- materialen
- ▁kosovo
- ▁olifant
- ▁plattegrond
- ▁samenvatting
- ▁verwarming
- ▁interieur
- ▁michiel
- traject
- ▁herhalen
- ▁onderwijzer
- ▁gitaar
- ▁tamelijk
- ▁uitzien
- ▁verhuisd
- vruchten
- prijzen
- otto
- eeën
- ▁uitnodiging
- ▁barbecue
- ▁depressie
- ▁egypte
- ▁fysiek
- ▁gebaseerd
- ▁geliefde
- ▁heerenveen
- ▁idioot
- ▁nationaal
- ▁software
- ▁suggestie
- ▁traditie
- ▁verbeteren
- ▁pardon
- ▁remco
- ▁weiland
- ▁getekend
- ▁verdwijnt
- ladder
- ▁besmet
- brood
- economisch
- ▁vracht
- '8'
- puzzel
- ▁beschreven
- ▁commentaar
- ▁financiën
- ▁verklaren
- ▁zenuwachtig
- ▁golven
- ▁bewonder
- ▁onhandig
- ▁bezocht
- ▁gelderland
- dossier
- ▁arena
- verdeling
- gekleurd
- geluisterd
- ▁gepleegd
- ▁hormoon
- ▁uitnodigen
- comité
- ▁druppel
- ▁geschrokken
- ▁overeenkomst
- ▁gezondheidszorg
- ▁evenwicht
- ▁verstopt
- ▁volendam
- geblazen
- gelovig
- ▁verlegen
- ▁kalm
- overlast
- voudig
- ▁ambulance
- ▁constateren
- ▁drempel
- ▁euthanasie
- ▁grønkjaer
- ▁mirjam
- ▁paragraaf
- ▁tweeënnegentig
- ▁vleermuizen
- ▁gezongen
- ▁dreigt
- werkzaamheden
- schikking
- ▁bewezen
- ▁herhaling
- poetsen
- ▁zuiver
- ▁uitdrukken
- ▁gevierd
- ▁geboden
- ▁gestemd
- ▁gestegen
- begeleider
- ▁duif
- ▁gevecht
- grafie
- ▁hielp
- scholen
- leerling
- historische
- '&'
- object
- ▁beschuldig
- ▁daadwerkelijk
- ▁fulltime
- ▁lekkage
- ▁vitesse
- seksuele
- ▁geduurd
- ▁omgekeerd
- variant
- ▁gezorgd
- ▁albane
- ambtenaar
- ▁betekende
- ▁tineke
- ▁gekost
- ▁michel
- ▁aangekondigd
- seizoen
- getreden
- ▁opstaan
- werf
- ▁snee
- medewerker
- ▁demonstratie
- ▁finish
- ▁journalisten
- ▁mazzel
- ▁opgevoed
- ▁zevenendertig
- ▁canada
- ▁vloog
- ▁verwoest
- ▁gesprekspartner
- ▁parallel
- aandeelhouder
- ▁gekookt
- ▁aanpassen
- rnaartoe
- strook
- ▁camp
- ▁akelig
- verlichting
- ▁gehoor
- signaal
- ▁advocaten
- ▁alternatief
- ▁antwerpen
- ▁chocolademelk
- ▁commerciële
- ▁liesbos
- ▁secretaresse
- ▁triviant
- ▁vermoeiend
- ▁vitrage
- ▁vijfentachtig
- ▁beoordelen
- strepen
- ▁mannelijke
- ▁opgebouwd
- afdeling
- ▁zwitser
- ▁ideale
- ▁aanvankelijk
- ▁combineren
- ▁drieënveertig
- ▁geïnformeerd
- ▁indonesisch
- ▁ontdekking
- ▁pruimen
- ▁signale
- ▁voicemail
- monument
- ▁intensief
- ▁bezwaren
- ▁wiltord
- ▁visite
- ▁aafje
- ▁bewijzen
- profiel
- ▁fusie
- ▁ongelukkig
- ▁pijp
- ▁vermeend
- ▁verzorgd
- eeuwse
- ▁vermeld
- fragment
- gedrongen
- ▁bevestigd
- ▁ingrijpen
- ▁journalistiek
- ▁kootwijkerbroek
- ▁organiseer
- ▁palestijnen
- ▁sjonge
- ▁afgebroken
- ▁bepaalt
- ▁flikker
- ▁verzekerd
- ▁windows
- geknipt
- ▁claus
- ▁geleverd
- ▁schone
- blokken
- chirurg
- ▁aangifte
- ▁concreet
- ▁desnoods
- ▁dikwijls
- ▁godverdomme
- ▁introductie
- ▁labyrint
- ▁macedonië
- ▁particulier
- ▁roermond
- ▁solliciteren
- ▁vierenveertig
- ▁hiernaartoe
- ▁langdurig
- ▁keeper
- ▁stefan
- ▁auteur
- ▁gedood
- ▁modder
- ▁sindsdien
- spoelen
- ▁george
- ▁asiel
- ▁gemeld
- ▁schuil
- ▁stedelijk
- broeder
- telefoon
- jongen
- betaald
- ▁vergader
- academie
- functionaris
- ▁bereikbaar
- ▁fluisterde
- ▁garantie
- ▁griekse
- ▁omzeilen
- ▁portugal
- ▁postzegel
- ▁repetitie
- ▁jacques
- ▁jordaan
- ▁kopiëren
- patroon
- ▁strafrechtelijk
- ▁vijver
- ▁moniek
- ▁gewaardeerd
- ▁volume
- ▁johannes
- ▁aangepast
- ▁bitter
- ▁voorlichting
- ▁inspectie
- korps
- bestuur
- hebben
- muts
- ▁elektrisch
- ▁geneeskunde
- ▁getraind
- ▁grammatica
- ▁informeren
- ▁politici
- ▁promotie
- ▁socialistische
- ▁ticket
- ▁vieira
- ▁democratie
- ▁ingeleverd
- ▁ravage
- ▁bagger
- ▁logeer
- vloed
- bewust
- problematiek
- ▁ambassade
- ▁automobilist
- ▁leboeuf
- ▁rayford
- ▁realiseer
- ▁sittard
- ▁treffen
- ▁gewild
- ▁afgerond
- ▁verrassend
- ▁fraai
- ▁jessica
- ontsteking
- ▁benauwd
- ▁partnerschap
- ▁schok
- zeggen
- functie
- abonnement
- ▁fatsoenlijk
- ▁geopereerd
- ▁hoogleraar
- ▁joegoslavisch
- ▁kwartfinale
- ▁marathon
- ▁moeizaam
- ▁moskou
- ▁relevant
- ▁schoonmoeder
- ▁stimuleren
- ▁klooster
- ▁verbazing
- ▁werkgelegenheid
- ▁zesentachtig
- ▁geschilderd
- ▁stijf
- ▁gelokt
- ▁uitgaven
- ▁gemaild
- ▁concept
- ▁pleit
- ▁beschouwd
- geschoven
- effect
- college
- ▁verbind
- gejaagd
- installatie
- ▁abstract
- ▁alternatieve
- ▁concrete
- ▁deelnemers
- ▁emotioneel
- ▁glucose
- ▁inclusief
- ▁vacature
- ▁wimbledon
- ▁zesenzestig
- ▁volgorde
- ▁bisschop
- ▁verzonnen
- ▁appelsap
- ▁handicap
- dreiging
- ▁bewering
- ▁mopper
- ▁abortus
- parade
- ▁allereerst
- roddel
- bruin
- ▁recherche
- ▁eenenveertig
- ▁fluistert
- ▁kroonprins
- ▁opgeruimd
- ▁psychisch
- financiering
- ▁suriname
- ▁verdween
- kliniek
- ▁arthur
- ▁mompel
- ▁testament
- ▁vloek
- ▁bejaarden
- ▁ridder
- ▁sukkel
- ▁sneu
- ▁aanmerking
- ▁pittig
- ▁verwond
- ▁prioriteit
- ▁geconstateerd
- ▁heintze
- ▁infrastructuur
- ▁instructie
- ▁uitgestippeld
- ▁uitgezocht
- ▁willekeurig
- ▁fietspaden
- ▁minimum
- ▁wahid
- ▁driekwart
- ▁gescoord
- ▁veiling
- ▁tijdstip
- ▁permanent
- ▁vorst
- ▁lomp
- ▁verklaard
- ▁staarde
- ervaring
- frisse
- hartig
- fabrikant
- ▁clinton
- ▁digitale
- ▁maurice
- ▁paprika
- ▁produceren
- ▁cirkel
- ▁draagvlak
- ▁oceaan
- ▁vakbonden
- ▁module
- ▁zweven
- bepaling
- circuit
- speelt
- ▁criminele
- ▁criteria
- ▁drieënnegentig
- ▁emotionele
- ▁filosofie
- ▁gadverdamme
- ▁greenpeace
- ▁hilversum
- ▁jeruzalem
- ▁negenenveertig
- ▁rabobank
- ▁tweeënhalf
- lerarenopleiding
- ▁bizar
- ▁gestart
- ▁zuinig
- ▁kermis
- ▁voorgelezen
- ▁verschenen
- ▁beroerd
- ▁vuist
- premie
- militair
- cultuur
- ▁vierenvijftig
- '6'
- bevoegdheid
- brittannië
- technologie
- ▁autoriteiten
- ▁drieënvijftig
- ▁flauwekul
- ▁geconfronteerd
- ▁gewaarschuwd
- ▁huiseigenaar
- ▁marokko
- ▁middeleeuwen
- ▁onschuldig
- ▁portemonnee
- ▁vreugde
- ▁alkmaar
- ▁bedreigd
- ▁onrijpe
- ▁oorzaken
- ▁opbergruimte
- ▁laptop
- ▁sigaren
- ▁bestuderen
- ▁fototoestel
- ▁anouk
- ▁afgezien
- ▁ondernemen
- ▁mededeling
- ▁voorwerp
- ▁nichtje
- ▁special
- trekking
- ▁bianca
- ▁champagne
- ▁communiceren
- ▁demonstranten
- ▁geblesseerd
- ▁gewelddadig
- ▁glaasje
- ▁integraal
- ▁privacy
- ▁senioren
- ▁telecom
- ▁uitbreiding
- ▁vastgelegd
- ▁vernieuwing
- ▁ivbo
- ▁samenwonen
- ▁scoort
- ▁grootvader
- stijging
- ▁gevaren
- ▁complete
- affaire
- ▁skate
- schatting
- personeel
- artikel
- spreid
- strategie
- ▁magnetron
- ▁chinezen
- ▁conditie
- ▁dominant
- ▁luxemburg
- ▁misverstand
- ▁achtentachtig
- percentage
- ▁documentaire
- business
- ▁bevond
- ▁opgeleverd
- ▁klinker
- ▁geleidelijk
- ç
- ä
- '5'
- '7'
- ø
- ñ
- â
- á
- '9'
- î
- í
- ô
- ó
- Â
- É
- ã
- Ö
- Ü
- û
- '*'
- Å
- Ç
- Î
- ê
- <sos/eos>
init: null
input_size: 83
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list_cgn_jasming1g2/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: null
frontend_conf: {}
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/stats_cgn_jasming1g2_g1vc2foldgcdc_83dim/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: contextual_block_conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
activation_type: swish
macaron_style: true
use_cnn_module: true
cnn_module_kernel: 31
block_size: 0
hop_size: 16
look_ahead: 0
init_average: true
ctx_pos_enc: true
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 5
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.10.5
distributed: true
```
</details>
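For quick offline decoding with this checkpoint, a minimal sketch using the standard ESPnet2 inference API is shown below; the config and checkpoint paths are placeholders for the files in this run's `exp/` directory, and the decoding options are illustrative rather than tuned values.
```python
# Sketch: ESPnet2 decoding with this ASR model (paths are placeholders).
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text(
    asr_train_config="exp/asr_train/config.yaml",      # placeholder path
    asr_model_file="exp/asr_train/valid.acc.ave.pth",  # placeholder path
    ctc_weight=0.3,  # matches ctc_weight in model_conf above
    beam_size=10,    # illustrative decoding setting
)

speech, rate = soundfile.read("sample.wav")  # 16 kHz mono audio
text, tokens, token_ids, hyp = speech2text(speech)[0]
print(text)
```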
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
MrRobotoAI/127-Q4_K_M-GGUF | MrRobotoAI | "2025-05-09T21:33:02Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:MrRobotoAI/127",
"base_model:quantized:MrRobotoAI/127",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-09T21:32:39Z" | ---
base_model: MrRobotoAI/127
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# MrRobotoAI/127-Q4_K_M-GGUF
This model was converted to GGUF format from [`MrRobotoAI/127`](https://huggingface.co/MrRobotoAI/127) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrRobotoAI/127) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrRobotoAI/127-Q4_K_M-GGUF --hf-file 127-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrRobotoAI/127-Q4_K_M-GGUF --hf-file 127-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrRobotoAI/127-Q4_K_M-GGUF --hf-file 127-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrRobotoAI/127-Q4_K_M-GGUF --hf-file 127-q4_k_m.gguf -c 2048
```
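If you prefer Python over the CLI, llama-cpp-python can fetch and run the same GGUF file; this is a sketch assuming a recent llama-cpp-python build with `from_pretrained` support (`pip install llama-cpp-python huggingface_hub`).
```python
# Sketch: load the Q4_K_M GGUF from this repo via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MrRobotoAI/127-Q4_K_M-GGUF",
    filename="127-q4_k_m.gguf",
    n_ctx=2048,  # matches the -c 2048 used in the server example above
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```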
|
jimkap/APEL-Qwen3-0.6B-v.3 | jimkap | "2025-05-09T21:30:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T21:29:18Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
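Until the authors provide an official snippet, the sketch below assumes standard `transformers` causal-LM usage for this Qwen3-based checkpoint; the prompt and generation settings are illustrative, not author recommendations.
```python
# Sketch: basic chat-style generation with this checkpoint (assumed usage).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jimkap/APEL-Qwen3-0.6B-v.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what you can help with."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```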
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF | mradermacher | "2025-05-09T21:27:19Z" | 620 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"code",
"error-correction",
"R1",
"14B",
"Reasoning",
"math",
"en",
"base_model:prithivMLmods/Canum-Venaticorum-14B-B.1",
"base_model:quantized:prithivMLmods/Canum-Venaticorum-14B-B.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-04-13T07:21:26Z" | ---
base_model: prithivMLmods/Canum-Venaticorum-14B-B.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- code
- error-correction
- R1
- 14B
- Reasoning
- math
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/prithivMLmods/Canum-Venaticorum-14B-B.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
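To grab a single-file quant programmatically, here is a small sketch with `huggingface_hub`; pick any filename from the table below.
```python
# Sketch: download one quant from this repo, then point llama.cpp at it.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF",
    filename="Canum-Venaticorum-14B-B.1.i1-Q4_K_M.gguf",  # any entry below
)
print(path)  # e.g. run: llama-cli -m <path> -p "Hello"
```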
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Canum-Venaticorum-14B-B.1-i1-GGUF/resolve/main/Canum-Venaticorum-14B-B.1.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
IAHispano/Applio | IAHispano | "2025-05-09T21:27:15Z" | 0 | 116 | transformers | [
"transformers",
"onnx",
"AI",
"RVC",
"VITS",
"VC",
"Voice Conversion",
"Voice2Voice",
"audio-to-audio",
"dataset:CSTR-Edinburgh/vctk",
"base_model:lj1995/VoiceConversionWebUI",
"base_model:quantized:lj1995/VoiceConversionWebUI",
"license:mit",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-10-03T18:58:40Z" | ---
pipeline_tag: audio-to-audio
tags:
- AI
- RVC
- VITS
- VC
- Voice Conversion
- Voice2Voice
license: mit
datasets:
- CSTR-Edinburgh/vctk
base_model:
- lj1995/VoiceConversionWebUI
---
<h1 align="center">
<a href="https://applio.org" target="_blank"><img src="https://github.com/IAHispano/Applio/assets/133521603/78e975d8-b07f-47ba-ab23-5a31592f322a" alt="Applio"></a>
</h1>
<p align="center">A simple, high-quality voice conversion tool, focused on ease of use and performance.</p>
<p align="center">
<a href="https://applio.org" target="_blank">🌐 Website</a>
•
<a href="https://docs.applio.org" target="_blank">📚 Documentation</a>
•
<a href="https://discord.gg/urxFjYmYYh" target="_blank">☎️ Discord</a>
</p>
<p align="center">
<a href="https://github.com/IAHispano/Applio-Plugins" target="_blank">🛒 Plugins</a>
•
<a href="https://huggingface.co/IAHispano/Applio/tree/main/Compiled" target="_blank">📦 Compiled</a>
•
<a href="https://applio.org/playground" target="_blank">🎮 Playground</a>
•
<a href="https://colab.research.google.com/github/iahispano/applio/blob/master/assets/Applio.ipynb" target="_blank">🔎 Google Colab (UI)</a>
•
<a href="https://colab.research.google.com/github/iahispano/applio/blob/master/assets/Applio_NoUI.ipynb" target="_blank">🔎 Google Colab (No UI)</a>
</p>
## Introduction
Applio is a powerful voice conversion tool focused on simplicity, quality, and performance. Whether you're an artist, developer, or researcher, Applio offers a straightforward platform for high-quality voice transformations. Its flexible design allows for customization through plugins and configurations, catering to a wide range of projects.
## Terms of Use
The use of Applio is entirely at your own discretion and responsibility. By using this tool, you agree to:
1. Respect all applicable copyrights, intellectual property rights, and privacy rights. Ensure that any audio or material processed through Applio is either owned by you or used with explicit permission from the rightful owner.
2. Avoid using Applio in ways that may harm, defame, or infringe upon the rights of others. This includes, but is not limited to, the creation or distribution of unauthorized content.
3. Comply with all relevant laws and regulations governing the use of AI and voice transformation tools in your jurisdiction.
Applio and its contributors are not liable for any misuse of the tool. The responsibility for adhering to ethical practices and legal compliance lies solely with the user. Applio does not endorse or support any activities that result in harm to individuals, groups, or entities. All official models distributed by Applio have been trained on public-use datasets such as VCTK.
## Getting Started
### 1. Installation
Run the installation script based on your operating system:
- **Windows:** Double-click `run-install.bat`.
- **Linux/macOS:** Execute `run-install.sh`.
### 2. Running Applio
Start Applio using:
- **Windows:** Double-click `run-applio.bat`.
- **Linux/macOS:** Run `run-applio.sh`.
This launches the Gradio interface in your default browser.
### 3. Optional: TensorBoard Monitoring
To monitor training or visualize data:
- **Windows:** Run `run-tensorboard.bat`.
- **Linux/macOS:** Run `run-tensorboard.sh`.
For more detailed instructions, visit the [documentation](https://docs.applio.org).
## Commercial Usage
For commercial use, follow the [MIT license](./LICENSE) and contact us at [email protected] to ensure ethical use. The use of Applio-generated audio files must comply with applicable copyrights. Consider supporting Applio’s development [through a donation](https://ko-fi.com/iahispano).
## References
Applio is made possible thanks to these projects and their references:
- [gradio-screen-recorder](https://huggingface.co/spaces/gstaff/gradio-screen-recorder) by gstaff
- [rvc-cli](https://github.com/blaisewf/rvc-cli) by blaisewf
### Contributors
<a href="https://github.com/IAHispano/Applio/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=IAHispano/Applio" />
</a> |
ajagota71/toxicity-reward-model-max-margin-seed-300-pythia-160m-checkpoint-50 | ajagota71 | "2025-05-09T21:27:00Z" | 0 | 0 | null | [
"safetensors",
"gpt_neox",
"region:us"
] | null | "2025-05-09T21:26:36Z" | # toxicity-reward-model-max-margin-seed-300-pythia-160m-checkpoint-50
This model was trained using max_margin IRL to learn toxicity reward signals.
Base model: EleutherAI/pythia-160m
Original model: EleutherAI/pythia-160M
Detoxified model: ajagota71/pythia-160m-detox-epoch-100
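A minimal scoring sketch is shown below; it assumes the checkpoint exposes a sequence-classification (reward) head, as the `text-classification` pipeline tag suggests, and the interpretation of higher scores as less toxic is an assumption rather than documented behavior.
```python
# Sketch: score a piece of text with this reward model (assumed head type).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "ajagota71/toxicity-reward-model-max-margin-seed-300-pythia-160m-checkpoint-50"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("You are a wonderful person.", return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits.squeeze()
print(float(reward))  # scalar reward; sign/scale conventions are assumptions
```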
---
language: en
tags:
- toxicity
- reward-model
- irl
library_name: transformers
base_model: pythia-160m
pipeline_tag: text-classification
---
|
mradermacher/Q-Star-Rag-7.6B-GGUF | mradermacher | "2025-05-09T21:23:26Z" | 141 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:datumo/E-Star-7.6B",
"base_model:quantized:datumo/E-Star-7.6B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-15T04:43:58Z" | ---
base_model: datumo/E-Star-7.6B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/datumo/E-Star-7.6B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Q-Star-Rag-7.6B-GGUF/resolve/main/Q-Star-Rag-7.6B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Q-Star-Rag-7.6B-GGUF/resolve/main/Q-Star-Rag-7.6B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Q-Star-Rag-7.6B-GGUF/resolve/main/Q-Star-Rag-7.6B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Q-Star-Rag-7.6B-GGUF/resolve/main/Q-Star-Rag-7.6B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Q-Star-Rag-7.6B-GGUF/resolve/main/Q-Star-Rag-7.6B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Q-Star-Rag-7.6B-GGUF/resolve/main/Q-Star-Rag-7.6B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Q-Star-Rag-7.6B-GGUF/resolve/main/Q-Star-Rag-7.6B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Q-Star-Rag-7.6B-GGUF/resolve/main/Q-Star-Rag-7.6B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Q-Star-Rag-7.6B-GGUF/resolve/main/Q-Star-Rag-7.6B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Q-Star-Rag-7.6B-GGUF/resolve/main/Q-Star-Rag-7.6B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Q-Star-Rag-7.6B-GGUF/resolve/main/Q-Star-Rag-7.6B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Q-Star-Rag-7.6B-GGUF/resolve/main/Q-Star-Rag-7.6B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
dfh55y45/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-agile_darting_ostrich | dfh55y45 | "2025-05-09T21:21:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am agile darting ostrich",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-05-01T04:35:53Z" | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-agile_darting_ostrich
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am agile darting ostrich
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-agile_darting_ostrich
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dfh55y45/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-agile_darting_ostrich", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Mungert/shuttle-3.5-GGUF | Mungert | "2025-05-09T21:20:58Z" | 50 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | "2025-05-08T18:19:48Z" | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/shuttleai/shuttle-3.5/blob/main/LICENSE
pipeline_tag: text-generation
language:
- en
tags:
- chat
---
# <span style="color: #7FFF7F;">shuttle-3.5 GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`8c83449`](https://github.com/ggerganov/llama.cpp/commit/8c83449cb780c201839653812681c3a4cf17feed).
## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span>
Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.
### **Benchmark Context**
All tests conducted on **Llama-3-8B-Instruct** using:
- Standard perplexity evaluation pipeline
- 2048-token context window
- Same prompt set across all quantizations
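A sliding-window perplexity harness in that spirit might look like the sketch below; it is illustrative only, not the exact pipeline behind the numbers in this card, and the model ID is a placeholder.
```python
# Sketch: sliding-window perplexity over a fixed prompt set (illustrative).
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

text = open("eval_prompts.txt").read()  # same prompt set for every quant
ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)

window, nll_sum, n_tokens = 2048, 0.0, 0  # 2048-token context window
for start in range(0, ids.size(1), window):
    chunk = ids[:, start : start + window]
    if chunk.size(1) < 2:
        break
    with torch.no_grad():
        loss = model(chunk, labels=chunk).loss  # labels shifted internally
    nll_sum += loss.item() * (chunk.size(1) - 1)
    n_tokens += chunk.size(1) - 1

print(f"PPL: {math.exp(nll_sum / n_tokens):.2f}")
```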
### **Method**
- **Dynamic Precision Allocation**:
- First/Last 25% of layers → IQ4_XS (selected layers)
- Middle 50% → IQ2_XXS/IQ3_S (increased efficiency)
- **Critical Component Protection**:
- Embeddings/output layers use Q5_K
- Reduces error propagation by 38% vs standard 1-2bit
### **Quantization Performance Comparison (Llama-3-8B)**
| Quantization | Standard PPL | DynamicGate PPL | Δ PPL | Std Size | DG Size | Δ Size | Std Speed | DG Speed |
|--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------|
| IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s |
| IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s |
| IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s |
| IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s |
| IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s |
**Key**:
- PPL = Perplexity (lower is better)
- Δ PPL = Percentage change from standard to DynamicGate
- Speed = Inference time (CPU avx2, 2048 token context)
- Size differences reflect mixed quantization overhead
**Key Improvements:**
- 🔥 **IQ1_M** shows massive 43.9% perplexity reduction (27.46 → 15.41)
- 🚀 **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB
- ⚡ **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization
**Tradeoffs:**
- All variants have modest size increases (0.1-0.3GB)
- Inference speeds remain comparable (<5% difference)
### **When to Use These Models**
✔ **Fitting models into GPU VRAM**
✔ **Memory-constrained deployments**
✔ **CPU and edge devices** where 1-2 bit errors can be tolerated
✔ **Research** into ultra-low-bit quantization
## **Choosing the Right Model Format**
Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.
### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**
- A 16-bit floating-point format designed for **faster computation** while retaining good precision.
- Provides **similar dynamic range** as FP32 but with **lower memory usage**.
- Recommended if your hardware supports **BF16 acceleration** (check your device's specs).
- Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32.
📌 **Use BF16 if:**
✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
✔ You want **higher precision** while saving memory.
✔ You plan to **requantize** the model into another format.
📌 **Avoid BF16 if:**
❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.
---
### **F16 (Float 16) – More widely supported than BF16**
- A 16-bit floating-point format offering **high precision** but a smaller range of values than BF16.
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
📌 **Use F16 if:**
✔ Your hardware supports **FP16** but **not BF16**.
✔ You need a **balance between speed, memory usage, and accuracy**.
✔ You are running on a **GPU** or another device optimized for FP16 computations.
📌 **Avoid F16 if:**
❌ Your device lacks **native FP16 support** (it may run slower than expected).
❌ You have memory limitations.
---
### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- **Lower-bit models (Q4_K)** → **Best for minimal memory usage**, may have lower precision.
- **Higher-bit models (Q6_K, Q8_0)** → **Better accuracy**, requires more memory.
📌 **Use Quantized Models if:**
✔ You are running inference on a **CPU** and need an optimized model.
✔ Your device has **low VRAM** and cannot load full-precision models.
✔ You want to reduce **memory footprint** while keeping reasonable accuracy.
📌 **Avoid Quantized Models if:**
❌ You need **maximum accuracy** (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
---
### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)**
These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint.
- **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**.
- **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large.
- **Trade-off**: Lower accuracy compared to higher-bit quantizations.
- **IQ3_S**: Small block size for **maximum memory efficiency**.
- **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive.
- **IQ3_M**: Medium block size for better accuracy than **IQ3_S**.
- **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting.
- **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy.
- **Use case**: Best for **low-memory devices** where **Q6_K** is too large.
- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**.
- **Use case**: Best for **ARM-based devices** or **low-memory environments**.
---
### **Summary Table: Model Format Selection**
| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|--------------|------------|---------------|----------------------|---------------|
| **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments |
| **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy |
| **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
---
## **Included Files & Details**
### `shuttle-3.5-bf16.gguf`
- Model weights preserved in **BF16**.
- Use this if you want to **requantize** the model into a different format.
- Best if your device supports **BF16 acceleration**.
### `shuttle-3.5-f16.gguf`
- Model weights stored in **F16**.
- Use if your device supports **FP16**, especially if BF16 is not available.
### `shuttle-3.5-bf16-q8_0.gguf`
- **Output & embeddings** remain in **BF16**.
- All other layers quantized to **Q8_0**.
- Use if your device supports **BF16** and you want a quantized version.
### `shuttle-3.5-f16-q8_0.gguf`
- **Output & embeddings** remain in **F16**.
- All other layers quantized to **Q8_0**.
### `shuttle-3.5-q4_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q4_K**.
- Good for **CPU inference** with limited memory.
### `shuttle-3.5-q4_k_s.gguf`
- Smallest **Q4_K** variant, using less memory at the cost of accuracy.
- Best for **very low-memory setups**.
### `shuttle-3.5-q6_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q6_K**.
### `shuttle-3.5-q8_0.gguf`
- Fully **Q8** quantized model for better accuracy.
- Requires **more memory** but offers higher precision.
### `shuttle-3.5-iq3_xs.gguf`
- **IQ3_XS** quantization, optimized for **extreme memory efficiency**.
- Best for **ultra-low-memory devices**.
### `shuttle-3.5-iq3_m.gguf`
- **IQ3_M** quantization, offering a **medium block size** for better accuracy.
- Suitable for **low-memory devices**.
### `shuttle-3.5-q4_0.gguf`
- Pure **Q4_0** quantization, optimized for **ARM devices**.
- Best for **low-memory environments**.
- Prefer IQ4_NL for better accuracy.
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
❤ **Please click "Like" if you find this useful!**
Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Free Network Monitor](https://readyforquantum.com/dashboard/?assistant=open)
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4o-mini)
- `HugLLM` (Hugging Face open-source)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**)
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4o-mini** for:
- **Create custom cmd processors to run .net code on Free Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
- 🔑 Get more tokens by logging in or [downloading our Free Network Monitor Agent with integrated AI Assistant](https://readyforquantum.com/download)
🔵 **HugLLM** – Latest Open-source models:
- 🌐 Runs on Hugging Face Inference API
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum-safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to ... (whatever you want)"` Note: you need to install a Free Network Monitor Agent to run the .net code. This is a very flexible and powerful feature. Use with caution!
<p style="font-size:20px;" align="left">
<div style="border-radius: 15px;">
<img
src="https://storage.shuttleai.com/shuttle-3.5.png"
alt="ShuttleAI Thumbnail"
style="width: auto; height: auto; margin-left: 0; object-fit: cover; border-radius: 15px;">
</div>
## Shuttle-3.5
### ☁️ <a href="https://shuttleai.com/" target="_blank">Use via API</a> • 💬 <a href="https://shuttlechat.com/" target="_blank">ShuttleChat</a>
We are excited to introduce Shuttle-3.5, a fine-tuned version of [Qwen3 32b](https://huggingface.co/Qwen/Qwen3-32B), emulating the writing style of Claude 3 models and thoroughly trained on role-playing data.
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Shuttle 3.5** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 32.8B
- Number of Parameters (Non-Embedding): 31.2B
- Number of Layers: 64
- Number of Attention Heads (GQA): 64 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
## Fine-Tuning Details
- **Training Setup**: The model was trained on 130 million tokens for 40 hours on an H100 GPU. |
Heshamproduct/gemma-3-12b-elderly-care-merged-v11 | Heshamproduct | "2025-05-09T21:20:39Z" | 93 | 0 | null | [
"safetensors",
"gemma3",
"gemma",
"gemma-3",
"fine-tuned",
"elderly-care",
"conversational-ai",
"text-generation",
"merged-lora",
"text2text-generation",
"en",
"base_model:google/gemma-3-12b-it",
"base_model:finetune:google/gemma-3-12b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-05-08T10:18:35Z" | ---
license: apache-2.0
language: en
tags:
- gemma
- gemma-3
- fine-tuned
- elderly-care
- conversational-ai
- text-generation
- merged-lora
pipeline_tag: text2text-generation
base_model: google/gemma-3-12b-it
---
# Gemma-3-12B-IT Fine-tuned for Elderly Care (Merged)
This is a fine-tuned version of google/gemma-3-12b-it specialized for elderly care conversations.
The model was fine-tuned using LoRA (Low-Rank Adaptation) and then the adapters were merged into the base model for easier deployment and inference.
## Model Description
This model aims to be an empathetic, patient, and helpful conversational assistant for elderly individuals. It has been trained on a dataset of high-quality prompt-response pairs relevant to scenarios encountered in elderly care. The fine-tuning focused on:
* **Companionship:** Engaging in friendly, supportive, and warm conversations.
* **Information Seeking (Non-Medical):** Answering general knowledge questions and providing information that can be found via web searches (when integrated into an application).
* **Assistance Requests:** Understanding requests for simple reminders or task-related help.
* **Storytelling:** Generating short, uplifting stories.
**Base Model:** google/gemma-3-12b-it
**Fine-tuning Data:** high_quality_data_v1_b20_concurrent_v2.jsonl
**Fine-tuning Method:** LoRA
* r: 16
* lora_alpha: 32
* lora_dropout: 0.05
* Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
**Source Adapters (Merged into this Model):** gemma3_12b_elderly_care_adapters_v11
## Intended Uses & Limitations
### Intended Uses:
* **Conversational Companion:** Providing a friendly and engaging chat partner for elderly users to combat loneliness and encourage interaction.
* **Information Resource:** Answering general knowledge questions and, if connected to a search API, providing up-to-date information on weather, news, etc.
* **Simple Task Assistance:** Helping with reminders for medication (as a prompt, not a reliable scheduler) or other daily activities, if integrated into a larger application.
* **Storytelling:** Offering light entertainment through short, positive stories.
### Limitations:
* **No Medical Advice:** This model is **NOT** a substitute for professional medical advice, diagnosis, or treatment. Always consult a qualified healthcare provider for any health concerns.
* **Potential for Inaccuracies:** Like all LLMs, the model can generate incorrect or nonsensical information (hallucinations). Responses should be critically evaluated.
* **Not for Critical Decisions:** Do not rely on this model for making critical decisions regarding health, finance, or safety.
* **Bias:** The model may reflect biases present in its training data (both the base model's pre-training data and the fine-tuning dataset).
* **Safety:** While the fine-tuning data is curated, the model might still generate unexpected or inappropriate content in some edge cases. Robust safety filtering in the application layer is recommended.
* **Context Window:** Limited by max_seq_length: 1024 during fine-tuning. Long conversations might lose earlier context.
## How to Use
This is a fully merged model. You can load it directly using AutoModelForCausalLM and AutoTokenizer from the Hugging Face transformers library.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "Heshamproduct/gemma-3-12b-elderly-care-merged-v11"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto", # Handles multi-GPU or CPU if CUDA is not available
torch_dtype=torch.bfloat16 # Recommended for Gemma models; "auto" also works
)
# Gemma instruction format is important for optimal performance
prompt = "### Instruction:\nTell me a short, happy story about a sunny day.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Generate response
# Adjust generation parameters as needed
outputs = model.generate(
**inputs,
max_new_tokens=250,
do_sample=True,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.1,
pad_token_id=tokenizer.eos_token_id
)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
# The output will likely include the prompt, so you might want to clean it:
response_only = generated_text.split("### Response:\n")[-1].strip()
print(response_only)
```
## Training Procedure
The model was fine-tuned from google/gemma-3-12b-it using the SFTTrainer from the trl library and PEFT LoRA. The adapters used for this merged model originated from the directory gemma3_12b_elderly_care_adapters_v11.
### Key Training Parameters (for the source adapters):
* **Base Model:** google/gemma-3-12b-it
* **Fine-tuning Dataset:** high_quality_data_v1_b20_concurrent_v2.jsonl
* **Formatting Function:** Custom function to structure data as `### Instruction:\n{prompt}\n\n### Response:\n{response}`.
* **Max Sequence Length:** 1024 tokens
* **LoRA Configuration:**
* r: 16
* lora_alpha: 32
* lora_dropout: 0.05
* target_modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
* **Training Arguments:**
* per_device_train_batch_size: 2
* gradient_accumulation_steps: 8 (Effective batch size: 16)
* num_train_epochs: 3.0
* learning_rate: 2e-4
* optim: "paged_adamw_8bit"
* bf16: True
* logging_steps: 10
* save_steps: 100
* warmup_ratio: 0.03
* seed: 42
* **Quantization (during fine-tuning of adapters):**
* load_in_4bit: True
* bnb_4bit_quant_type: "nf4"
* bnb_4bit_compute_dtype: torch.bfloat16
* bnb_4bit_use_double_quant: True
The fine-tuning process that generated the source adapters was logged in a file similar to finetune_gemma3_12b_hq_v24.log (potentially with a different version number if adapters v11 were from an earlier run than the v24 log).
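In `peft`/`transformers` terms, the hyperparameters listed above map roughly onto the following configs; this is a reconstruction of the listed settings, not the authors' actual training script.
```python
# Sketch: LoRA + 4-bit quantization configs matching the listed settings.
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```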
## Evaluation Results
Formal quantitative evaluation (e.g., perplexity on a holdout set, standardized benchmarks) has not yet been performed. Qualitative assessment during development showed improved adherence to instructions and persona consistency for elderly care scenarios compared to the base model.
---
Model Card created by Heshamproduct.
This model was fine-tuned and merged as part of an elderly care chatbot project.
Repository: Heshamproduct/gemma-3-12b-elderly-care-merged-v11 |
Kevinlv/medical-question-model | Kevinlv | "2025-05-09T21:19:45Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-24T07:43:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
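Until the authors fill this in, the sketch below assumes the standard `transformers` text-classification pipeline; the label names and their meanings depend on the checkpoint's config and are not documented here.
```python
# Sketch: classify a question with this model (assumed standard pipeline).
from transformers import pipeline

classifier = pipeline("text-classification", model="Kevinlv/medical-question-model")
print(classifier("What are the common side effects of ibuprofen?"))
```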
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thinusbothma/Annie | thinusbothma | "2025-05-09T21:19:01Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:HiDream-ai/HiDream-I1-Full",
"base_model:adapter:HiDream-ai/HiDream-I1-Full",
"license:cc-by-sa-4.0",
"region:us"
] | text-to-image | "2025-05-09T21:18:56Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/Chat.jfif
base_model: HiDream-ai/HiDream-I1-Full
instance_prompt: null
license: cc-by-sa-4.0
---
# FFR
<Gallery />
## Model description
Woman
## Download model
[Download](/thinusbothma/Annie/tree/main) them in the Files & versions tab.
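For programmatic use, a rough sketch of loading the LoRA with diffusers follows. It is untested: both the pipeline resolution for HiDream-I1-Full and the LoRA loading path are assumptions and may vary by diffusers version.

```py
# Hedged, untested sketch — assumes diffusers can resolve a pipeline for
# HiDream-I1-Full and that the LoRA weights load directly by repo id.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Full", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("thinusbothma/Annie")  # repo id; weight filename autodetected
image = pipe("a portrait photo of a woman").images[0]
image.save("annie.png")
```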
|
pytorch/Phi-4-mini-instruct-int4wo-hqq | pytorch | "2025-05-09T21:18:03Z" | 1,058 | 0 | transformers | [
"transformers",
"pytorch",
"phi3",
"text-generation",
"torchao",
"phi",
"phi4",
"nlp",
"code",
"math",
"chat",
"conversational",
"custom_code",
"multilingual",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:quantized:microsoft/Phi-4-mini-instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-08T04:31:34Z" | ---
library_name: transformers
tags:
- torchao
- phi
- phi4
- nlp
- code
- math
- chat
- conversational
license: mit
language:
- multilingual
base_model:
- microsoft/Phi-4-mini-instruct
pipeline_tag: text-generation
---
[Phi4-mini](https://huggingface.co/microsoft/Phi-4-mini-instruct) quantized with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao) int4 weight-only quantization, using the [hqq](https://mobiusml.github.io/hqq_blog/) algorithm for improved accuracy, by the PyTorch team. Use it directly or serve it with [vLLM](https://docs.vllm.ai/en/latest/) for a 67% VRAM reduction and a 12-20% speedup on A100 GPUs.
# Inference with vLLM
Install vllm nightly and torchao nightly to get some recent changes:
```
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
pip install git+https://github.com/pytorch/ao.git
```
## Code Example
```Py
from vllm import LLM, SamplingParams
# Sample prompts.
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
if __name__ == '__main__':
# Create an LLM.
llm = LLM(model="pytorch/Phi-4-mini-instruct-int4wo-hqq")
# Generate texts from the prompts.
# The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
print("\nGenerated Outputs:\n" + "-" * 60)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}")
print(f"Output: {generated_text!r}")
print("-" * 60)
```
Note: please use `VLLM_DISABLE_COMPILE_CACHE=1` to disable the compile cache when running this code, e.g. `VLLM_DISABLE_COMPILE_CACHE=1 python example.py`, since there are some issues with the composability of compile in vLLM and torchao; this is expected to be resolved in PyTorch 2.8.
## Serving
Then we can serve with the following command:
```Shell
vllm serve pytorch/Phi-4-mini-instruct-int4wo-hqq --tokenizer microsoft/Phi-4-mini-instruct -O3
```
# Inference with Transformers
Install the required packages:
```Shell
pip install git+https://github.com/huggingface/transformers@main
pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126
pip install torch
pip install accelerate
```
Example:
```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model_path = "pytorch/Phi-4-mini-instruct-int4wo-hqq"
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
# Quantization Recipe
Install the required packages:
```Shell
pip install git+https://github.com/huggingface/transformers@main
pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126
pip install torch
pip install accelerate
```
Use the following code to get the quantized model:
```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
model_id = "microsoft/Phi-4-mini-instruct"
from torchao.quantization import Int4WeightOnlyConfig
quant_config = Int4WeightOnlyConfig(group_size=128, use_hqq=True)
quantization_config = TorchAoConfig(quant_type=quant_config)
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Push to hub
USER_ID = "YOUR_USER_ID"
MODEL_NAME = model_id.split("/")[-1]
save_to = f"{USER_ID}/{MODEL_NAME}-int4wo-hqq"
quantized_model.push_to_hub(save_to, safe_serialization=False)
tokenizer.push_to_hub(save_to)
# Manual Testing
prompt = "Hey, are you conscious? Can you talk to me?"
messages = [
{
"role": "system",
"content": "",
},
{"role": "user", "content": prompt},
]
templated_prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
print("Prompt:", prompt)
print("Templated prompt:", templated_prompt)
inputs = tokenizer(
templated_prompt,
return_tensors="pt",
).to("cuda")
generated_ids = quantized_model.generate(**inputs, max_new_tokens=128)
output_text = tokenizer.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Response:", output_text[0][len(prompt):])
```
Note: to `push_to_hub` you need to run
```Shell
pip install -U "huggingface_hub[cli]"
huggingface-cli login
```
and use a token with write access, from https://huggingface.co/settings/tokens
# Model Quality
We rely on [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the quality of the quantized model.
lm-eval needs to be installed from source:
https://github.com/EleutherAI/lm-evaluation-harness#install
## baseline
```Shell
lm_eval --model hf --model_args pretrained=microsoft/Phi-4-mini-instruct --tasks hellaswag --device cuda:0 --batch_size 8
```
## int4 weight only quantization with hqq (int4wo-hqq)
```Shell
lm_eval --model hf --model_args pretrained=pytorch/Phi-4-mini-instruct-int4wo-hqq --tasks hellaswag --device cuda:0 --batch_size 8
```
| Benchmark | | |
|----------------------------------|----------------|---------------------------|
| | Phi-4-mini-ins | Phi-4-mini-ins-int4wo-hqq |
| **Popular aggregated benchmark** | | |
| mmlu (0-shot) | 66.73 | 63.56 |
| mmlu_pro (5-shot) | 46.43 | 36.74 |
| **Reasoning** | | |
| arc_challenge (0-shot) | 56.91 | 54.86 |
| gpqa_main_zeroshot | 30.13 | 30.58 |
| HellaSwag | 54.57 | 53.54 |
| openbookqa | 33.00 | 34.40 |
| piqa (0-shot) | 77.64 | 76.33 |
| social_iqa | 49.59 | 47.90 |
| truthfulqa_mc2 (0-shot) | 48.39 | 46.44 |
| winogrande (0-shot) | 71.11 | 71.51 |
| **Multilingual** | | |
| mgsm_en_cot_en | 60.8 | 59.6 |
| **Math** | | |
| gsm8k (5-shot) | 81.88 | 74.37 |
| mathqa (0-shot) | 42.31 | 42.75 |
| **Overall** | **55.35** | **53.28** |
# Peak Memory Usage
## Results
| Benchmark | | |
|------------------|----------------|--------------------------------|
| | Phi-4 mini-Ins | Phi-4-mini-instruct-int4wo-hqq |
| Peak Memory (GB) | 8.91 | 2.98 (67% reduction) |
## Code Example
We can use the following code to get a sense of peak memory usage during inference:
```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
# use "microsoft/Phi-4-mini-instruct" or "pytorch/Phi-4-mini-instruct-int4wo-hqq"
model_id = "pytorch/Phi-4-mini-instruct-int4wo-hqq"
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
torch.cuda.reset_peak_memory_stats()
prompt = "Hey, are you conscious? Can you talk to me?"
messages = [
{
"role": "system",
"content": "",
},
{"role": "user", "content": prompt},
]
templated_prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
print("Prompt:", prompt)
print("Templated prompt:", templated_prompt)
inputs = tokenizer(
templated_prompt,
return_tensors="pt",
).to("cuda")
generated_ids = quantized_model.generate(**inputs, max_new_tokens=128)
output_text = tokenizer.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Response:", output_text[0][len(prompt):])
mem = torch.cuda.max_memory_reserved() / 1e9
print(f"Peak Memory Usage: {mem:.02f} GB")
```
# Model Performance
Our int4wo checkpoint is only optimized for batch size 1, so expect some slowdown with larger batch sizes. We expect this to be used in local server deployments for a single user or a few users, where decode tokens per second matter more than time to first token.
## Results (A100 machine)
| Benchmark (Latency) | | |
|----------------------------------|----------------|--------------------------|
| | Phi-4 mini-Ins | phi4-mini-int4wo-hqq |
| latency (batch_size=1) | 2.46s | 2.2s (12% speedup) |
| serving (num_prompts=1) | 0.87 req/s | 1.05 req/s (20% speedup) |
Note that the latency result (benchmark_latency) is in seconds, and the serving result (benchmark_serving) is in requests per second.
Int4 weight-only quantization is optimized for batch size 1 and short input/output token lengths; please stay tuned for models optimized for larger batch sizes or longer token lengths.
## Setup
Get vllm source code:
```Shell
git clone [email protected]:vllm-project/vllm.git
```
Install vllm
```
VLLM_USE_PRECOMPILED=1 pip install --editable .
```
Run the benchmarks under `vllm` root folder:
## benchmark_latency
### baseline
```Shell
python benchmarks/benchmark_latency.py --input-len 256 --output-len 256 --model microsoft/Phi-4-mini-instruct --batch-size 1
```
### int4wo-hqq
```Shell
VLLM_DISABLE_COMPILE_CACHE=1 python benchmarks/benchmark_latency.py --input-len 256 --output-len 256 --model pytorch/Phi-4-mini-instruct-int4wo-hqq --batch-size 1
```
## benchmark_serving
We benchmarked the throughput in a serving environment.
Download sharegpt dataset:
```Shell
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
Other datasets can be found in: https://github.com/vllm-project/vllm/tree/main/benchmarks
Note: you can change the number of prompts to be benchmarked with `--num-prompts` argument for `benchmark_serving` script.
### baseline
Server:
```Shell
vllm serve microsoft/Phi-4-mini-instruct --tokenizer microsoft/Phi-4-mini-instruct -O3
```
Client:
```Shell
python benchmarks/benchmark_serving.py --backend vllm --dataset-name sharegpt --tokenizer microsoft/Phi-4-mini-instruct --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json --model microsoft/Phi-4-mini-instruct --num-prompts 1
```
### int4wo-hqq
Server:
```Shell
VLLM_DISABLE_COMPILE_CACHE=1 vllm serve pytorch/Phi-4-mini-instruct-int4wo-hqq --tokenizer microsoft/Phi-4-mini-instruct -O3 --pt-load-map-location cuda:0
```
Client:
```Shell
python benchmarks/benchmark_serving.py --backend vllm --dataset-name sharegpt --tokenizer microsoft/Phi-4-mini-instruct --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json --model pytorch/Phi-4-mini-instruct-int4wo-hqq --num-prompts 1
```
# Disclaimer
PyTorch has not performed safety evaluations or red teamed the quantized models. Performance characteristics, outputs, and behaviors may differ from the original models. Users are solely responsible for selecting appropriate use cases, evaluating and mitigating for accuracy, safety, and fairness, ensuring security, and complying with all applicable laws and regulations.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the licenses the models are released under, including any limitations of liability or disclaimers of warranties provided therein. |
unsloth/Mistral-Small-3.1-24B-Instruct-2503 | unsloth | "2025-05-09T21:17:09Z" | 2,311 | 13 | vllm | [
"vllm",
"safetensors",
"mistral3",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"base_model:finetune:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"license:apache-2.0",
"region:us"
] | null | "2025-03-18T20:01:14Z" | ---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
inference: false
base_model:
- mistralai/Mistral-Small-3.1-24B-Instruct-2503
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
# Model Card for Mistral-Small-3.1-24B-Instruct-2503
Building upon Mistral Small 3 (2501), Mistral Small 3.1 (2503) **adds state-of-the-art vision understanding** and enhances **long context capabilities up to 128k tokens** without compromising text performance.
With 24 billion parameters, this model achieves top-tier capabilities in both text and vision tasks.
This model is an instruction-finetuned version of: [Mistral-Small-3.1-24B-Base-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503).
Mistral Small 3.1 can be deployed locally and is exceptionally "knowledge-dense," fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.
It is ideal for:
- Fast-response conversational agents.
- Low-latency function calling.
- Subject matter experts via fine-tuning.
- Local inference for hobbyists and organizations handling sensitive data.
- Programming and math reasoning.
- Long document understanding.
- Visual understanding.
For enterprises requiring specialized capabilities (increased context, specific modalities, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.
Learn more about Mistral Small 3.1 in our [blog post](https://mistral.ai/news/mistral-small-3-1/).
## Key Features
- **Vision:** Vision capabilities enable the model to analyze images and provide insights based on visual content in addition to text.
- **Multilingual:** Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Swedish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, Farsi.
- **Agent-Centric:** Offers best-in-class agentic capabilities with native function calling and JSON outputting.
- **Advanced Reasoning:** State-of-the-art conversational and reasoning capabilities.
- **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window:** A 128k context window.
- **System Prompt:** Maintains strong adherence and support for system prompts.
- **Tokenizer:** Utilizes a Tekken tokenizer with a 131k vocabulary size.
## Benchmark Results
When available, we report numbers previously published by other model providers; otherwise, we re-evaluate them using our own evaluation harness.
### Pretrain Evals
| Model | MMLU (5-shot) | MMLU Pro (5-shot CoT) | TriviaQA | GPQA Main (5-shot CoT)| MMMU |
|--------------------------------|---------------|-----------------------|------------|-----------------------|-----------|
| **Small 3.1 24B Base** | **81.01%** | **56.03%** | 80.50% | **37.50%** | **59.27%**|
| Gemma 3 27B PT | 78.60% | 52.20% | **81.30%** | 24.30% | 56.10% |
### Instruction Evals
#### Text
| Model | MMLU | MMLU Pro (5-shot CoT) | MATH | GPQA Main (5-shot CoT) | GPQA Diamond (5-shot CoT )| MBPP | HumanEval | SimpleQA (TotalAcc)|
|--------------------------------|-----------|-----------------------|------------------------|------------------------|---------------------------|-----------|-----------|--------------------|
| **Small 3.1 24B Instruct** | 80.62% | 66.76% | 69.30% | **44.42%** | **45.96%** | 74.71% | **88.41%**| **10.43%** |
| Gemma 3 27B IT | 76.90% | **67.50%** | **89.00%** | 36.83% | 42.40% | 74.40% | 87.80% | 10.00% |
| GPT4o Mini | **82.00%**| 61.70% | 70.20% | 40.20% | 39.39% | 84.82% | 87.20% | 9.50% |
| Claude 3.5 Haiku | 77.60% | 65.00% | 69.20% | 37.05% | 41.60% | **85.60%**| 88.10% | 8.02% |
| Cohere Aya-Vision 32B | 72.14% | 47.16% | 41.98% | 34.38% | 33.84% | 70.43% | 62.20% | 7.65% |
#### Vision
| Model | MMMU | MMMU PRO | Mathvista | ChartQA | DocVQA | AI2D | MM MT Bench |
|--------------------------------|------------|-----------|-----------|-----------|-----------|-------------|-------------|
| **Small 3.1 24B Instruct** | 64.00% | **49.25%**| **68.91%**| 86.24% | **94.08%**| **93.72%** | **7.3** |
| Gemma 3 27B IT | **64.90%** | 48.38% | 67.60% | 76.00% | 86.60% | 84.50% | 7 |
| GPT4o Mini | 59.40% | 37.60% | 56.70% | 76.80% | 86.70% | 88.10% | 6.6 |
| Claude 3.5 Haiku | 60.50% | 45.03% | 61.60% | **87.20%**| 90.00% | 92.10% | 6.5 |
| Cohere Aya-Vision 32B | 48.20% | 31.50% | 50.10% | 63.04% | 72.40% | 82.57% | 4.1 |
### Multilingual Evals
| Model | Average | European | East Asian | Middle Eastern |
|--------------------------------|------------|------------|------------|----------------|
| **Small 3.1 24B Instruct** | **71.18%** | **75.30%** | **69.17%** | 69.08% |
| Gemma 3 27B IT | 70.19% | 74.14% | 65.65% | 70.76% |
| GPT4o Mini | 70.36% | 74.21% | 65.96% | **70.90%** |
| Claude 3.5 Haiku | 70.16% | 73.45% | 67.05% | 70.00% |
| Cohere Aya-Vision 32B | 62.15% | 64.70% | 57.61% | 64.12% |
### Long Context Evals
| Model | LongBench v2 | RULER 32K | RULER 128K |
|--------------------------------|-----------------|-------------|------------|
| **Small 3.1 24B Instruct** | **37.18%** | **93.96%** | 81.20% |
| Gemma 3 27B IT | 34.59% | 91.10% | 66.00% |
| GPT4o Mini | 29.30% | 90.20% | 65.8% |
| Claude 3.5 Haiku | 35.19% | 92.60% | **91.90%** |
## Basic Instruct Template (V7-Tekken)
```
<s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST]
```
*`<system prompt>`, `<user message>` and `<assistant response>` are placeholders.*
***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth***
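For illustration, here is a minimal sketch that assembles the template string manually; placeholder names follow the template above, and for actual tokenization you should still rely on mistral-common.

```py
# Illustrative only — manual V7-Tekken prompt assembly.
# Use mistral-common as the source of truth for real tokenization.
def build_prompt(system_prompt: str, turns: list[tuple[str, str]]) -> str:
    """turns: list of (user message, assistant response); the last response may be ''."""
    out = f"<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT]"
    for user, assistant in turns:
        out += f"[INST]{user}[/INST]"
        if assistant:
            out += f"{assistant}</s>"
    return out

print(build_prompt("You are helpful.", [("Hi!", "Hello!"), ("Tell me a joke.", "")]))
```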
## Usage
The model can be used with the following frameworks;
- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm)
**Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`.
**Note 2**: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend the following system prompt:
```
system_prompt = """You are Mistral Small 3.1, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
You power an AI assistant called Le Chat.
Your knowledge base was last updated on 2023-10-01.
The current date is {today}.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or "When is the next flight to Tokyo" => "Where do you travel from?").
You are always very attentive to dates, in particular you try to resolve dates (e.g. "yesterday" is {yesterday}) and when asked about information at specific dates, you discard information that is at another date.
You follow these instructions in all languages, and always respond to the user in the language they use or request.
Next sections describe the capabilities that you have.
# WEB BROWSING INSTRUCTIONS
You cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat.
# MULTI-MODAL INSTRUCTIONS
You have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos.
You cannot read nor transcribe audio files or videos."""
```
### vLLM (recommended)
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.
**_Installation_**
Make sure you install [`vLLM nightly`](https://github.com/vllm-project/vllm/):
```
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly --upgrade
```
Doing so should automatically install [`mistral_common >= 1.5.4`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.4).
To check:
```
python -c "import mistral_common; print(mistral_common.__version__)"
```
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from [Docker Hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39), followed by a nightly install of vLLM as shown above.
#### Server
We recommend using Mistral-Small-3.1-24B-Instruct-2503 in a server/client setting.
1. Spin up a server:
```
vllm serve mistralai/Mistral-Small-3.1-24B-Instruct-2503 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --limit_mm_per_prompt 'image=10' --tensor-parallel-size 2
```
**Note:** Running Mistral-Small-3.1-24B-Instruct-2503 on GPU requires ~55 GB of GPU RAM in bf16 or fp16.
2. To ping the client you can use a simple Python snippet.
```py
import requests
import json
from huggingface_hub import hf_hub_download
from datetime import datetime, timedelta
url = "http://<your-server-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
today = datetime.today().strftime("%Y-%m-%d")
yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
model_name = repo_id.split("/")[-1]
return system_prompt.format(name=model_name, today=today, yesterday=yesterday)
SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
image_url = "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/europe.png"
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": [
{
"type": "text",
"text": "Which of the depicted countries has the best food? Which the second and third and fourth? Name the country, its color on the map and one its city that is visible on the map, but is not the capital. Make absolutely sure to only name a city that can be seen on the map.",
},
{"type": "image_url", "image_url": {"url": image_url}},
],
},
]
data = {"model": model, "messages": messages, "temperature": 0.15}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
# Determining the "best" food is highly subjective and depends on personal preferences. However, based on general popularity and recognition, here are some countries known for their cuisine:
# 1. **Italy** - Color: Light Green - City: Milan
# - Italian cuisine is renowned worldwide for its pasta, pizza, and various regional specialties.
# 2. **France** - Color: Brown - City: Lyon
# - French cuisine is celebrated for its sophistication, including dishes like coq au vin, bouillabaisse, and pastries like croissants and éclairs.
# 3. **Spain** - Color: Yellow - City: Bilbao
# - Spanish cuisine offers a variety of flavors, from paella and tapas to jamón ibérico and churros.
# 4. **Greece** - Not visible on the map
# - Greek cuisine is known for dishes like moussaka, souvlaki, and baklava. Unfortunately, Greece is not visible on the provided map, so I cannot name a city.
# Since Greece is not visible on the map, I'll replace it with another country known for its good food:
# 4. **Turkey** - Color: Light Green (east part of the map) - City: Istanbul
# - Turkish cuisine is diverse and includes dishes like kebabs, meze, and baklava.
```
### Function calling
Mistral-Small-3.1-24-Instruct-2503 is excellent at function / tool calling tasks via vLLM. *E.g.:*
<details>
<summary>Example</summary>
```py
import requests
import json
from huggingface_hub import hf_hub_download
from datetime import datetime, timedelta
url = "http://<your-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
today = datetime.today().strftime("%Y-%m-%d")
yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
model_name = repo_id.split("/")[-1]
return system_prompt.format(name=model_name, today=today, yesterday=yesterday)
SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "The city to find the weather for, e.g. 'San Francisco'",
},
"state": {
"type": "string",
"description": "The state abbreviation, e.g. 'CA' for California",
},
"unit": {
"type": "string",
"description": "The unit for temperature",
"enum": ["celsius", "fahrenheit"],
},
},
"required": ["city", "state", "unit"],
},
},
},
{
"type": "function",
"function": {
"name": "rewrite",
"description": "Rewrite a given text for improved clarity",
"parameters": {
"type": "object",
"properties": {
"text": {
"type": "string",
"description": "The input text to rewrite",
}
},
},
},
},
]
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.",
},
{
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "bbc5b7ede",
"type": "function",
"function": {
"name": "rewrite",
"arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}',
},
}
],
},
{
"role": "tool",
"content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}',
"tool_call_id": "bbc5b7ede",
"name": "rewrite",
},
{
"role": "assistant",
"content": "---\n\nOpenAI is a FOR-profit company.",
},
{
"role": "user",
"content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?",
},
]
data = {"model": model, "messages": messages, "tools": tools, "temperature": 0.15}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["tool_calls"])
# [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}]
```
</details>
#### Offline
```py
from vllm import LLM
from vllm.sampling_params import SamplingParams
from datetime import datetime, timedelta
SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
user_prompt = "Give me 5 non-formal ways to say 'See you later' in French."
messages = [
{
"role": "system",
"content": SYSTEM_PROMPT
},
{
"role": "user",
"content": user_prompt
},
]
model_name = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
# note that running this model on GPU requires over 60 GB of GPU RAM
llm = LLM(model=model_name, tokenizer_mode="mistral")
sampling_params = SamplingParams(max_tokens=512, temperature=0.15)
outputs = llm.chat(messages, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
# Here are five non-formal ways to say "See you later" in French:
# 1. **À plus tard** - Until later
# 2. **À toute** - See you soon (informal)
# 3. **Salut** - Bye (can also mean hi)
# 4. **À plus** - See you later (informal)
# 5. **Ciao** - Bye (informal, borrowed from Italian)
# ```
# /\_/\
# ( o.o )
# > ^ <
# ```
```
### Transformers (untested)
Transformers-compatible model weights are also uploaded (thanks a lot @cyrilvallez).
However, the transformers implementation was **not thoroughly tested**, only "vibe-checked"; a minimal loading sketch is shown below.
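A minimal, untested loading sketch (the class names are assumptions drawn from the Mistral3 transformers integration):

```py
# Untested sketch — behavior is only verified with vLLM (see above).
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "unsloth/Mistral-Small-3.1-24B-Instruct-2503"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
messages = [{"role": "user", "content": "Give me 3 facts about Paris."}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```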
Hence, we can only ensure 100% correct behavior when using the original weight format with vllm (see above). |
ajagota71/toxicity-reward-model-max-margin-seed-200-pythia-160m | ajagota71 | "2025-05-09T21:15:10Z" | 0 | 0 | null | [
"safetensors",
"gpt_neox",
"region:us"
] | null | "2025-05-09T21:14:45Z" | # toxicity-reward-model-max-margin-seed-200-pythia-160m
This model was trained using max_margin IRL to learn toxicity reward signals.
Base model: EleutherAI/pythia-160m
Original model: EleutherAI/pythia-160M
Detoxified model: ajagota71/pythia-160m-detox-epoch-100
---
language: en
tags:
- toxicity
- reward-model
- irl
library_name: transformers
base_model: pythia-160m
pipeline_tag: text-classification
---
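A minimal scoring sketch, assuming the checkpoint loads as a sequence-classification head (per the text-classification pipeline tag above); the head and label layout are assumptions, not documented in this card.

```python
# Hedged sketch: assumes AutoModelForSequenceClassification can load this checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ajagota71/toxicity-reward-model-max-margin-seed-200-pythia-160m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("an example completion to score", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("reward logits:", logits)
```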
|
ajagota71/toxicity-reward-model-max-margin-seed-200-pythia-160m-checkpoint-50 | ajagota71 | "2025-05-09T21:14:39Z" | 0 | 0 | null | [
"safetensors",
"gpt_neox",
"region:us"
] | null | "2025-05-09T21:14:15Z" | # toxicity-reward-model-max-margin-seed-200-pythia-160m-checkpoint-50
This model was trained using max_margin IRL to learn toxicity reward signals.
Base model: EleutherAI/pythia-160m
Original model: EleutherAI/pythia-160M
Detoxified model: ajagota71/pythia-160m-detox-epoch-100
---
language: en
tags:
- toxicity
- reward-model
- irl
library_name: transformers
base_model: pythia-160m
pipeline_tag: text-classification
---
|
unsloth/Llama-3.2-3B-Instruct-GGUF | unsloth | "2025-05-09T21:14:17Z" | 41,605 | 37 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"llama-3",
"meta",
"facebook",
"unsloth",
"en",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:quantized:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-09-25T19:47:33Z" | ---
base_model: meta-llama/Llama-3.2-3B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
---
## ***See [our collection](https://huggingface.co/collections/unsloth/llama-32-66f46afde4ca573864321a22) for all versions of Llama 3.2 including GGUF, 4-bit and original 16-bit formats.***
# GGUF uploads
16bit, 8bit, 6bit, 5bit, 4bit, 3bit and 2bit uploads are available.
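For example, one quant can be pulled and run with llama.cpp; the exact filename below is an assumption, so check the Files tab for the quant you want.

```Shell
# Filename is illustrative; pick an actual .gguf from the repo's Files tab.
huggingface-cli download unsloth/Llama-3.2-3B-Instruct-GGUF \
  Llama-3.2-3B-Instruct-Q4_K_M.gguf --local-dir .
./llama-cli -m Llama-3.2-3B-Instruct-Q4_K_M.gguf -p "Hello, world" -n 64
```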
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1T5-zKWM_5OD21QHwXHiV9ixTRR7k3iB9?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Llama-3.2-3B
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-3B)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
|
DeepMount00/test_gemma_v1 | DeepMount00 | "2025-05-09T21:14:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T21:12:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic | RedHatAI | "2025-05-09T21:14:00Z" | 2,815 | 6 | vllm | [
"vllm",
"safetensors",
"mistral3",
"neuralmagic",
"redhat",
"llmcompressor",
"quantized",
"FP8",
"image-text-to-text",
"conversational",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"base_model:quantized:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | image-text-to-text | "2025-03-27T02:50:44Z" | ---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
base_model:
- mistralai/Mistral-Small-3.1-24B-Instruct-2503
pipeline_tag: image-text-to-text
tags:
- neuralmagic
- redhat
- llmcompressor
- quantized
- FP8
---
# Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic
## Model Overview
- **Model Architecture:** Mistral3ForConditionalGeneration
- **Input:** Text / Image
- **Output:** Text
- **Model Optimizations:**
- **Activation quantization:** FP8
- **Weight quantization:** FP8
- **Intended Use Cases:** It is ideal for:
- Fast-response conversational agents.
- Low-latency function calling.
- Subject matter experts via fine-tuning.
- Local inference for hobbyists and organizations handling sensitive data.
- Programming and math reasoning.
- Long document understanding.
- Visual understanding.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages not officially supported by the model.
- **Release Date:** 04/15/2025
- **Version:** 1.0
- **Model Developers:** RedHat (Neural Magic)
### Model Optimizations
This model was obtained by quantizing activations and weights of [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503) to FP8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
Weight quantization also reduces disk size requirements by approximately 50%.
Only weights and activations of the linear operators within transformers blocks are quantized.
Weights are quantized with a symmetric static per-channel scheme, whereas activations are quantized with a symmetric dynamic per-token scheme.
The [llm-compressor](https://github.com/vllm-project/llm-compressor) library is used for quantization.
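As a rough illustration of the symmetric dynamic per-token activation scheme described above — a sketch only, not the actual llm-compressor/vLLM kernels — each token row gets its own scale derived from the FP8 e4m3 maximum (448):

```python
# Sketch of symmetric dynamic per-token FP8 (e4m3) activation quantization.
import torch

FP8_E4M3_MAX = 448.0  # largest representable magnitude in torch.float8_e4m3fn

def quantize_activations_per_token(x: torch.Tensor):
    # x: (num_tokens, hidden_dim); compute one scale per token (row)
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / FP8_E4M3_MAX
    x_fp8 = (x / scale).to(torch.float8_e4m3fn)
    return x_fp8, scale  # dequantize with x_fp8.to(x.dtype) * scale
```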
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoProcessor
model_id = "RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic"
number_gpus = 1
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
processor = AutoProcessor.from_pretrained(model_id)
messages = [{"role": "user", "content": "Give me a short introduction to large language model."}]
prompts = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Creation
<details>
<summary>Creation details</summary>
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForImageTextToText, AutoProcessor
# Load model
model_stub = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
model_name = model_stub.split("/")[-1]
model = AutoModelForImageTextToText.from_pretrained(model_stub)
processor = AutoProcessor.from_pretrained(model_stub)
# Configure the quantization algorithm and scheme
recipe = QuantizationModifier(
ignore=["language_model.lm_head", "re:vision_tower.*", "re:multi_modal_projector.*"],
targets="Linear",
scheme="FP8_dynamic",
)
# Apply quantization
oneshot(
model=model,
recipe=recipe,
)
# Save to disk in compressed-tensors format
save_path = model_name + "-FP8-dynamic"
model.save_pretrained(save_path)
processor.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
</details>
## Evaluation
The model was evaluated on the OpenLLM leaderboard tasks (version 1), MMLU-pro, GPQA, HumanEval and MBPP.
Non-coding tasks were evaluated with [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), whereas coding tasks were evaluated with a fork of [evalplus](https://github.com/neuralmagic/evalplus).
[vLLM](https://docs.vllm.ai/en/stable/) is used as the engine in all cases.
<details>
<summary>Evaluation details</summary>
**MMLU**
```
lm_eval \
--model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
--tasks mmlu \
--num_fewshot 5 \
--apply_chat_template\
--fewshot_as_multiturn \
--batch_size auto
```
**ARC Challenge**
```
lm_eval \
--model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
--tasks arc_challenge \
--num_fewshot 25 \
--apply_chat_template\
--fewshot_as_multiturn \
--batch_size auto
```
**GSM8k**
```
lm_eval \
--model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.9,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
--tasks gsm8k \
--num_fewshot 8 \
--apply_chat_template\
--fewshot_as_multiturn \
--batch_size auto
```
**Hellaswag**
```
lm_eval \
--model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
--tasks hellaswag \
--num_fewshot 10 \
--apply_chat_template\
--fewshot_as_multiturn \
--batch_size auto
```
**Winogrande**
```
lm_eval \
--model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
--tasks winogrande \
--num_fewshot 5 \
--apply_chat_template\
--fewshot_as_multiturn \
--batch_size auto
```
**TruthfulQA**
```
lm_eval \
--model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
--tasks truthfulqa \
--num_fewshot 0 \
--apply_chat_template\
--batch_size auto
```
**MMLU-pro**
```
lm_eval \
--model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
--tasks mmlu_pro \
--num_fewshot 5 \
--apply_chat_template\
--fewshot_as_multiturn \
--batch_size auto
```
**Coding**
The commands below can be used for MBPP by simply replacing the dataset name.
*Generation*
```
python3 codegen/generate.py \
--model RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic \
--bs 16 \
--temperature 0.2 \
--n_samples 50 \
--root "." \
--dataset humaneval
```
*Sanitization*
```
python3 evalplus/sanitize.py \
humaneval/RedHatAI--Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic_vllm_temp_0.2
```
*Evaluation*
```
evalplus.evaluate \
--dataset humaneval \
--samples humaneval/RedHatAI--Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic_vllm_temp_0.2-sanitized
```
</details>
### Accuracy
<table>
<tr>
<th>Category
</th>
<th>Benchmark
</th>
<th>Mistral-Small-3.1-24B-Instruct-2503
</th>
<th>Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic<br>(this model)
</th>
<th>Recovery
</th>
</tr>
<tr>
<td rowspan="7" ><strong>OpenLLM v1</strong>
</td>
<td>MMLU (5-shot)
</td>
<td>80.67
</td>
<td>80.71
</td>
<td>100.1%
</td>
</tr>
<tr>
<td>ARC Challenge (25-shot)
</td>
<td>72.78
</td>
<td>72.87
</td>
<td>100.1%
</td>
</tr>
<tr>
<td>GSM-8K (5-shot, strict-match)
</td>
<td>58.68
</td>
<td>49.96
</td>
<td>85.1%
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>83.70
</td>
<td>83.67
</td>
<td>100.0%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>83.74
</td>
<td>82.56
</td>
<td>98.6%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot, mc2)
</td>
<td>70.62
</td>
<td>70.88
</td>
<td>100.4%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>75.03</strong>
</td>
<td><strong>73.49</strong>
</td>
<td><strong>97.9%</strong>
</td>
</tr>
<tr>
<td rowspan="3" ><strong></strong>
</td>
<td>MMLU-Pro (5-shot)
</td>
<td>67.25
</td>
<td>66.86
</td>
<td>99.4%
</td>
</tr>
<tr>
<td>GPQA CoT main (5-shot)
</td>
<td>42.63
</td>
<td>41.07
</td>
   <td>96.3%
</td>
</tr>
<tr>
<td>GPQA CoT diamond (5-shot)
</td>
<td>45.96
</td>
<td>45.45
</td>
<td>98.9%
</td>
</tr>
<tr>
<td rowspan="4" ><strong>Coding</strong>
</td>
<td>HumanEval pass@1
</td>
<td>84.70
</td>
<td>84.70
</td>
<td>100.0%
</td>
</tr>
<tr>
<td>HumanEval+ pass@1
</td>
<td>79.50
</td>
<td>79.30
</td>
<td>99.8%
</td>
</tr>
<tr>
<td>MBPP pass@1
</td>
<td>71.10
</td>
<td>70.00
</td>
<td>98.5%
</td>
</tr>
<tr>
<td>MBPP+ pass@1
</td>
<td>60.60
</td>
<td>59.50
</td>
<td>98.2%
</td>
</tr>
</table>
|
dejanseo/chrome_models | dejanseo | "2025-05-09T21:09:57Z" | 0 | 7 | null | [
"tflite",
"TensorFlow Lite v3",
"region:us"
] | null | "2024-11-07T00:35:28Z" | ---
tags:
- TensorFlow Lite v3
---
# A Collection of Google's On-Device Models
## Help us complete the list
- To contribute, go to C:\Users\YOUR_PC_USER\AppData\Local\Google\Chrome\User Data\optimization_guide_model_store
- If you find a new non-empty folder not listed [here](https://huggingface.co/dejanseo/chrome_models/upload/main), please [upload it to this repo](https://huggingface.co/dejanseo/chrome_models/upload/main)
## List of All Available Models
Following is the complete list of machine learning models in Chrome, many of which are on your device. They are located in your User Data folder, and you can easily check which ones you have, as they are all in numbered folders.
# Mapping of folder names to optimization target descriptions
```
OPTIMIZATION_TARGETS = {
"0": "UNKNOWN",
"1": "PAINFUL_PAGE_LOAD",
"2": "LANGUAGE_DETECTION",
"3": "PAGE_TOPICS",
"4": "SEGMENTATION_NEW_TAB",
"5": "SEGMENTATION_SHARE",
"6": "SEGMENTATION_VOICE",
"7": "MODEL_VALIDATION",
"8": "PAGE_ENTITIES",
"9": "NOTIFICATION_PERMISSION_PREDICTIONS",
"10": "SEGMENTATION_DUMMY",
"11": "SEGMENTATION_CHROME_START_ANDROID",
"12": "SEGMENTATION_QUERY_TILES",
"13": "PAGE_VISIBILITY",
"15": "PAGE_TOPICS_V2",
"16": "SEGMENTATION_CHROME_LOW_USER_ENGAGEMENT",
"17": "SEGMENTATION_FEED_USER",
"18": "CONTEXTUAL_PAGE_ACTION_PRICE_TRACKING",
"19": "TEXT_CLASSIFIER",
"20": "GEOLOCATION_PERMISSION_PREDICTIONS",
"21": "SEGMENTATION_SHOPPING_USER",
"22": "SEGMENTATION_CHROME_START_ANDROID_V2",
"23": "SEGMENTATION_SEARCH_USER",
"24": "OMNIBOX_ON_DEVICE_TAIL_SUGGEST",
"25": "CLIENT_SIDE_PHISHING",
"26": "OMNIBOX_URL_SCORING",
"27": "SEGMENTATION_DEVICE_SWITCHER",
"28": "SEGMENTATION_ADAPTIVE_TOOLBAR",
"29": "SEGMENTATION_TABLET_PRODUCTIVITY_USER",
"30": "CLIENT_SIDE_PHISHING_IMAGE_EMBEDDER",
"31": "NEW_TAB_PAGE_HISTORY_CLUSTERS_MODULE_RANKING",
"32": "WEB_APP_INSTALLATION_PROMO",
"33": "TEXT_EMBEDDER",
"34": "VISUAL_SEARCH_CLASSIFICATION",
"35": "SEGMENTATION_BOTTOM_TOOLBAR",
"36": "AUTOFILL_FIELD_CLASSIFICATION",
"37": "SEGMENTATION_IOS_MODULE_RANKER",
"38": "SEGMENTATION_DESKTOP_NTP_MODULE",
"39": "PRELOADING_HEURISTICS",
"40": "TEXT_SAFETY",
"41": "SEGMENTATION_ANDROID_HOME_MODULE_RANKER",
"42": "COMPOSE",
"43": "PASSAGE_EMBEDDER",
"44": "PHRASE_SEGMENTATION",
"45": "SEGMENTATION_COMPOSE_PROMOTION",
"46": "URL_VISIT_RESUMPTION_RANKER",
"47": "CAMERA_BACKGROUND_SEGMENTATION",
"48": "MODEL_EXECUTION_FEATURE_HISTORY_SEARCH",
"49": "MODEL_EXECUTION_FEATURE_PROMPT_API",
"50": "SEGMENTATION_METRICS_CLUSTERING",
"51": "MODEL_EXECUTION_FEATURE_SUMMARIZE",
"52": "PASSWORD_MANAGER_FORM_CLASSIFICATION",
"53": "NOTIFICATION_CONTENT_DETECTION",
"54": "MODEL_EXECUTION_FEATURE_HISTORY_QUERY_INTENT",
"55": "MODEL_EXECUTION_FEATURE_SCAM_DETECTION",
"56": "MODEL_EXECUTION_FEATURE_PERMISSIONS_AI",
"57": "EXPERIMENTAL_EMBEDDER",
"58": "SEGMENTATION_FEDCM_USER",
"59": "MODEL_EXECUTION_FEATURE_WRITING_ASSISTANCE_API"
}
```
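With the mapping above in scope, a small sketch can report which targets are present on a Windows machine (the path assumption matches the contribution note above):

```python
# Sketch: list downloaded Chrome on-device models by optimization target (Windows path assumed).
import os

store = os.path.expandvars(
    r"%LOCALAPPDATA%\Google\Chrome\User Data\optimization_guide_model_store"
)
for name in sorted(os.listdir(store), key=lambda s: (not s.isdigit(), s.zfill(4))):
    if name.isdigit() and os.listdir(os.path.join(store, name)):  # non-empty folders only
        print(name, OPTIMIZATION_TARGETS.get(name, "UNLISTED"))
```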
Source: [DEJAN](https://dejan.ai/blog/chrome-ai-models/) |
nnilayy/dreamer-arousal-binary-classification-Kfold-5 | nnilayy | "2025-05-09T21:09:46Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-05-09T21:09:44Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
RogerKoala/gen1-pokemon-classifier | RogerKoala | "2025-05-09T21:04:12Z" | 13 | 0 | keras | [
"keras",
"TensorFlow",
"Keras",
"Pokedex",
"Image-Classification",
"image-classification",
"en",
"dataset:RogerKoala/gen1-pokemon-images",
"base_model:keras/densenet_201_imagenet",
"base_model:finetune:keras/densenet_201_imagenet",
"license:mit",
"region:us"
] | image-classification | "2025-04-21T03:58:57Z" | ---
license: mit
language:
- en
metrics:
- accuracy
pipeline_tag: image-classification
tags:
- TensorFlow
- Keras
- Pokedex
- Image-Classification
datasets:
- RogerKoala/gen1-pokemon-images
base_model:
- keras/densenet_201_imagenet
library_name: keras
---
<style>
img {
display: inline;
}
</style>
[](#google-colab-notebook)
| [](#demo)
# Model Summary
* Architecture: DenseNet121.
* Accuracy: 96% on the test set.
* Framework: TensorFlow.
## Usage
```python
import keras
from keras.src.utils import load_img
from keras.src.applications.densenet import preprocess_input
import numpy as np
pokedex = keras.saving.load_model("pokedex.keras")
image = load_img('image.png', target_size=(224, 224))
x = keras.utils.img_to_array(image)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = pokedex.predict(x)
# Important: the class labels must be sorted alphabetically, because the model
# was trained with that ordering. The Pokemons.txt file is available here:
# https://huggingface.co/spaces/RogerKoala/Pokedex/blob/main/Pokemons.txt
with open('Pokemons.txt', 'r') as f:
class_labels = f.read().splitlines()
top_indices = preds[0].argsort()[-3:][::-1]
for i in top_indices:
print(f"{class_labels[i]}: {preds[0][i]*100:.2f}%")
```
## System
- Input reqs: 224×224×3 RGB, normalized.
- Downstream deps: Class index→Pokémon metadata lookup.
## Implementation requirements
- Training hardware: NVIDIA T4 GPU (Google Colab).
- Duration: 1 hour and 15 minutes.
# Model Characteristics
## Model initialization
- Fine-tuned from ImageNet DenseNet121.
## Model stats
| Layer (type) | Output Shape | Param # |
|------------------------------|----------------------|-------------|
| densenet121 (Functional) | (None, 7, 7, 1024) | 7,037,504 |
| global_average_pooling2d (GlobalAveragePooling2D) | (None, 1024) | 0 |
| dense (Dense) | (None, 128) | 131,200 |
| dropout (Dropout) | (None, 128) | 0 |
| dense_1 (Dense) | (None, 151) | 19,479 |
**Total params:** 21,397,255 (81.62 MB) <br>
**Trainable params:** 7,104,535 (27.10 MB) <br>
**Non-trainable params:** 83,648 (326.75 KB) <br>
**Optimizer params:** 14,209,072 (54.20 MB) <br>
## Training data
- Collected via scripts from public archives.
- Pre-processing: resize to 224×224, normalize.
## Evaluation data
- Train: 2249 images.
- Test: 840 images.
# Evaluation Results
## Summary
Test accuracy: 96%
## Usage limitations
- Covers only the original 151 Pokémon.
- May fail on non-canonical art styles, low light, and occlusion.
## Google Colab Notebook
Explore the model training and inference workflow in this interactive notebook:
* [Pokédex Colab Notebook](https://colab.research.google.com/drive/1IeGzndeOZ_9PnnWSnLjYgaS7QFn-4Voc?usp=sharing)
## Demo
Visit the live Space to try it out:
* [Hugging Face Space](https://huggingface.co/spaces/RogerKoala/Pokedex) |
logicalqubit/deberta-v3-large-news-classifier | logicalqubit | "2025-05-09T21:03:18Z" | 337 | 1 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"en",
"dataset:logicalqubit/news_133k",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-12T03:12:29Z" | ---
license: mit
language:
- en
base_model:
- microsoft/deberta-v3-large
pipeline_tag: text-classification
library_name: transformers
datasets:
- logicalqubit/news_133k
---
# DeBERTa v3 Large News Classifier
This model is a fine-tuned version of [`microsoft/deberta-v3-large`](https://huggingface.co/microsoft/deberta-v3-large) for **multi-class classification of news headlines**.
The model classifies headlines into one of **7 categories**:
- World
- Business
- Technology
- Entertainment
- Sports
- Science
- Health
---
## Training Dataset
This model was trained on the [`logicalqubit/news_133k`](https://huggingface.co/datasets/logicalqubit/news_133k) dataset,
which contains 133,000 labeled news headlines across the 7 categories mentioned above.
---
## Training Hyperparameters
- learning_rate: 6e-6
- train_batch_size: 8
- eval_batch_size: 8
- warmup_steps: 50
- num_epochs: 2
- report_to: wandb
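For reference, the hyperparameters above map onto a `transformers` `TrainingArguments` object roughly as follows. This is a minimal sketch, not the original training script; `output_dir` and anything not listed above are assumptions:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-v3-large-news-classifier",  # assumed
    learning_rate=6e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    warmup_steps=50,
    num_train_epochs=2,
    report_to="wandb",
)
```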
---
## Sample Code

---
## Sample Output

---
## Metrics
- While zero-shot models and a fine-tuned text classifier differ fundamentally in design and purpose, this comparison is presented for reference. It provides a practical benchmark, even though the comparison is not apples-to-apples.


---
## Wandb


--- |
ajagota71/toxicity-reward-model-max-margin-seed-100-pythia-160m | ajagota71 | "2025-05-09T21:02:38Z" | 0 | 0 | null | [
"safetensors",
"gpt_neox",
"region:us"
] | null | "2025-05-09T21:02:15Z" | # toxicity-reward-model-max-margin-seed-100-pythia-160m
This model was trained with max-margin inverse reinforcement learning (IRL) to learn toxicity reward signals.
Base model: EleutherAI/pythia-160m
Original model: EleutherAI/pythia-160M
Detoxified model: ajagota71/pythia-160m-detox-epoch-100
---
language: en
tags:
- toxicity
- reward-model
- irl
library_name: transformers
base_model: pythia-160m
pipeline_tag: text-classification
---
|
ajagota71/toxicity-reward-model-max-margin-seed-100-pythia-160m-checkpoint-50 | ajagota71 | "2025-05-09T21:02:08Z" | 0 | 0 | null | [
"safetensors",
"gpt_neox",
"region:us"
] | null | "2025-05-09T21:01:40Z" | # toxicity-reward-model-max-margin-seed-100-pythia-160m-checkpoint-50
This model was trained with max-margin inverse reinforcement learning (IRL) to learn toxicity reward signals.
Base model: EleutherAI/pythia-160m
Original model: EleutherAI/pythia-160M
Detoxified model: ajagota71/pythia-160m-detox-epoch-100
---
language: en
tags:
- toxicity
- reward-model
- irl
library_name: transformers
base_model: pythia-160m
pipeline_tag: text-classification
---
|
Grayx/jpii_24 | Grayx | "2025-05-09T20:57:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T20:55:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
netcat420/qwen3-1.7b-MFANN-slerp-Q4_0-GGUF | netcat420 | "2025-05-09T20:53:07Z" | 0 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Qwen/Qwen3-1.7B-FP8",
"netcat420/qwen3-MFANN-1.7b",
"llama-cpp",
"gguf-my-repo",
"base_model:netcat420/qwen3-1.7b-MFANN-slerp",
"base_model:quantized:netcat420/qwen3-1.7b-MFANN-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-09T20:52:59Z" | ---
base_model: netcat420/qwen3-1.7b-MFANN-slerp
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Qwen/Qwen3-1.7B-FP8
- netcat420/qwen3-MFANN-1.7b
- llama-cpp
- gguf-my-repo
---
# netcat420/qwen3-1.7b-MFANN-slerp-Q4_0-GGUF
This model was converted to GGUF format from [`netcat420/qwen3-1.7b-MFANN-slerp`](https://huggingface.co/netcat420/qwen3-1.7b-MFANN-slerp) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/netcat420/qwen3-1.7b-MFANN-slerp) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo netcat420/qwen3-1.7b-MFANN-slerp-Q4_0-GGUF --hf-file qwen3-1.7b-mfann-slerp-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo netcat420/qwen3-1.7b-MFANN-slerp-Q4_0-GGUF --hf-file qwen3-1.7b-mfann-slerp-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo netcat420/qwen3-1.7b-MFANN-slerp-Q4_0-GGUF --hf-file qwen3-1.7b-mfann-slerp-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo netcat420/qwen3-1.7b-MFANN-slerp-Q4_0-GGUF --hf-file qwen3-1.7b-mfann-slerp-q4_0.gguf -c 2048
```
|
rewicks/lidirl-lstm-aug | rewicks | "2025-05-09T20:52:25Z" | 10 | 0 | null | [
"safetensors",
"LidirlLSTM",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"text-classification",
"custom_code",
"license:mit",
"region:us"
] | text-classification | "2025-05-07T17:19:34Z" | ---
license: mit
pipeline_tag: text-classification
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
irenemurua/roberta-emotion-model | irenemurua | "2025-05-09T20:49:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-05-09T20:49:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
netcat420/qwen3-1.7b-MFANN-slerp | netcat420 | "2025-05-09T20:49:12Z" | 0 | 0 | null | [
"safetensors",
"qwen3",
"merge",
"mergekit",
"lazymergekit",
"Qwen/Qwen3-1.7B-FP8",
"netcat420/qwen3-MFANN-1.7b",
"license:apache-2.0",
"fp8",
"region:us"
] | null | "2025-05-09T20:48:14Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Qwen/Qwen3-1.7B-FP8
- netcat420/qwen3-MFANN-1.7b
---
# qwen3-1.7b-MFANN-slerp
qwen3-1.7b-MFANN-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Qwen/Qwen3-1.7B-FP8](https://huggingface.co/Qwen/Qwen3-1.7B-FP8)
* [netcat420/qwen3-MFANN-1.7b](https://huggingface.co/netcat420/qwen3-MFANN-1.7b)
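To reproduce a merge like this, the YAML configuration below can be passed to mergekit's command-line tool. This is a minimal sketch, assuming mergekit is installed (`pip install mergekit`):
```bash
# Save the configuration below as config.yaml, then run:
mergekit-yaml config.yaml ./qwen3-1.7b-MFANN-slerp
```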
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Qwen/Qwen3-1.7B-FP8
layer_range: [0, 28]
- model: netcat420/qwen3-MFANN-1.7b
layer_range: [0, 28]
merge_method: slerp
base_model: Qwen/Qwen3-1.7B-FP8
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
``` |
infoipman/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tall_mammalian_caribou | infoipman | "2025-05-09T20:49:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am tall mammalian caribou",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-05-02T15:18:14Z" | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tall_mammalian_caribou
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am tall mammalian caribou
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tall_mammalian_caribou
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="infoipman/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tall_mammalian_caribou", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
tarro-ml/wav2vec2-base-wonders-phonemes | tarro-ml | "2025-05-09T20:48:42Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-05-05T18:48:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shanchen/ds-limo-1.1-50 | shanchen | "2025-05-09T20:48:20Z" | 146 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-28T17:48:15Z" | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
library_name: transformers
model_name: ds-limo-1.1-50
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for ds-limo-1.1-50
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shanchen/ds-limo-1.1-50", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bitterman/s1/runs/td9ebxoz)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Oysiyl/colSmol-256M_ufo | Oysiyl | "2025-05-09T20:47:52Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:vidore/ColSmolVLM-Instruct-256M-base",
"base_model:adapter:vidore/ColSmolVLM-Instruct-256M-base",
"license:mit",
"region:us"
] | null | "2025-05-09T20:47:49Z" | ---
library_name: peft
license: mit
base_model: vidore/ColSmolVLM-Instruct-256M-base
tags:
- generated_from_trainer
model-index:
- name: colSmol-256M_ufo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# colSmol-256M_ufo
This model is a fine-tuned version of [vidore/ColSmolVLM-Instruct-256M-base](https://huggingface.co/vidore/ColSmolVLM-Instruct-256M-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1032
## Model description
More information needed
## Intended uses & limitations
More information needed
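Since usage is not documented yet, here is a minimal, hypothetical loading sketch. It assumes the adapter follows the standard PEFT layout and that the base checkpoint loads via `AutoModelForVision2Seq` (ColSmolVLM builds on SmolVLM/Idefics3); retrieval-style scoring would normally go through the `colpali_engine` wrappers instead:
```python
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import PeftModel

base_id = "vidore/ColSmolVLM-Instruct-256M-base"

base = AutoModelForVision2Seq.from_pretrained(base_id)               # base VLM
model = PeftModel.from_pretrained(base, "Oysiyl/colSmol-256M_ufo")   # attach the LoRA adapter
processor = AutoProcessor.from_pretrained(base_id)
```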
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1859 | 0.1636 | 80 | 0.1853 |
| 0.0961 | 0.3272 | 160 | 0.1389 |
| 0.1083 | 0.4908 | 240 | 0.1148 |
| 0.0677 | 0.6544 | 320 | 0.1058 |
| 0.0453 | 0.8180 | 400 | 0.1046 |
| 0.0693 | 0.9816 | 480 | 0.1032 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.3.1
- Tokenizers 0.21.0 |
nnilayy/dreamer-dominance-binary-classification-Kfold-4 | nnilayy | "2025-05-09T20:47:01Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-05-09T20:47:00Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
mradermacher/Gaia-LLM-4B-GGUF | mradermacher | "2025-05-09T20:43:14Z" | 160 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"en",
"base_model:my2000cup/Gaia-LLM-4B",
"base_model:quantized:my2000cup/Gaia-LLM-4B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-06T23:20:39Z" | ---
base_model: my2000cup/Gaia-LLM-4B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/my2000cup/Gaia-LLM-4B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
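As a quick supplement to the linked READMEs, a typical llama.cpp invocation looks like this. This is a minimal sketch, assuming llama.cpp is installed and the Q4_K_M file from the table below has been downloaded:
```bash
llama-cli -m Gaia-LLM-4B.Q4_K_M.gguf -p "Explain GGUF quantization in one sentence."
```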
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gaia-LLM-4B-GGUF/resolve/main/Gaia-LLM-4B.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gaia-LLM-4B-GGUF/resolve/main/Gaia-LLM-4B.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gaia-LLM-4B-GGUF/resolve/main/Gaia-LLM-4B.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gaia-LLM-4B-GGUF/resolve/main/Gaia-LLM-4B.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gaia-LLM-4B-GGUF/resolve/main/Gaia-LLM-4B.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gaia-LLM-4B-GGUF/resolve/main/Gaia-LLM-4B.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gaia-LLM-4B-GGUF/resolve/main/Gaia-LLM-4B.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gaia-LLM-4B-GGUF/resolve/main/Gaia-LLM-4B.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gaia-LLM-4B-GGUF/resolve/main/Gaia-LLM-4B.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gaia-LLM-4B-GGUF/resolve/main/Gaia-LLM-4B.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gaia-LLM-4B-GGUF/resolve/main/Gaia-LLM-4B.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gaia-LLM-4B-GGUF/resolve/main/Gaia-LLM-4B.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF | mradermacher | "2025-05-09T20:41:35Z" | 324 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"en",
"dataset:Neelectric/OpenR1-Math-220k_CN-K12_OLMo-2_4096toks",
"base_model:Neelectric/OLMo-2-1124-7B-Instruct_GRPOv02.05",
"base_model:quantized:Neelectric/OLMo-2-1124-7B-Instruct_GRPOv02.05",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-05-07T21:45:59Z" | ---
base_model: Neelectric/OLMo-2-1124-7B-Instruct_GRPOv02.05
datasets: Neelectric/OpenR1-Math-220k_CN-K12_OLMo-2_4096toks
language:
- en
library_name: transformers
model_name: OLMo-2-1124-7B-Instruct_GRPOv02.05
quantized_by: mradermacher
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Neelectric/OLMo-2-1124-7B-Instruct_GRPOv02.05
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-IQ1_S.gguf) | i1-IQ1_S | 1.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-IQ1_M.gguf) | i1-IQ1_M | 2.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-IQ2_S.gguf) | i1-IQ2_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-IQ2_M.gguf) | i1-IQ2_M | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-Q2_K.gguf) | i1-Q2_K | 3.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-IQ3_S.gguf) | i1-IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-IQ3_M.gguf) | i1-IQ3_M | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-Q4_0.gguf) | i1-Q4_0 | 4.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv02.05-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv02.05.i1-Q6_K.gguf) | i1-Q6_K | 6.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Symbiotic-8B-i1-GGUF | mradermacher | "2025-05-09T20:40:57Z" | 293 | 0 | transformers | [
"transformers",
"gguf",
"qwen3",
"8b",
"qwen3-8b",
"symbiotic",
"symbtioicai",
"en",
"dataset:0xZee/dataset-CoT-Advanced-Calculus-268",
"base_model:reaperdoesntknow/Symbiotic-8B",
"base_model:quantized:reaperdoesntknow/Symbiotic-8B",
"license:afl-3.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-05-08T05:16:23Z" | ---
base_model: reaperdoesntknow/Symbiotic-8B
datasets:
- 0xZee/dataset-CoT-Advanced-Calculus-268
language:
- en
library_name: transformers
license: afl-3.0
quantized_by: mradermacher
tags:
- qwen3
- 8b
- qwen3-8b
- symbiotic
- symbtioicai
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/reaperdoesntknow/Symbiotic-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Symbiotic-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Symbiotic-8B-i1-GGUF/resolve/main/Symbiotic-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/KYRA-1.0X-Horizon-GGUF | mradermacher | "2025-05-09T20:40:11Z" | 44 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"LLM",
"PyTorch",
"unsloth",
"code",
"Qwen",
"Qwen2.5",
"reasoning",
"general-intelligence",
"programming",
"avern",
"uk",
"en",
"base_model:averntech/KYRA-1.0X-Horizon",
"base_model:quantized:averntech/KYRA-1.0X-Horizon",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-05-08T22:20:23Z" | ---
base_model: averntech/KYRA-1.0X-Horizon
language:
- en
library_name: transformers
license: mit
model_name: Avern Prism 1.0X
no_imatrix: '[42]7.1747,[43]7.3397,nan detected in blk.47.attn_q.weight'
quantized_by: mradermacher
tags:
- text-generation
- LLM
- PyTorch
- unsloth
- code
- Qwen
- Qwen2.5
- reasoning
- general-intelligence
- programming
- avern
- uk
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/averntech/KYRA-1.0X-Horizon
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KYRA-1.0X-Horizon-GGUF/resolve/main/KYRA-1.0X-Horizon.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/KYRA-1.0X-Horizon-GGUF/resolve/main/KYRA-1.0X-Horizon.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/KYRA-1.0X-Horizon-GGUF/resolve/main/KYRA-1.0X-Horizon.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/KYRA-1.0X-Horizon-GGUF/resolve/main/KYRA-1.0X-Horizon.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/KYRA-1.0X-Horizon-GGUF/resolve/main/KYRA-1.0X-Horizon.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/KYRA-1.0X-Horizon-GGUF/resolve/main/KYRA-1.0X-Horizon.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KYRA-1.0X-Horizon-GGUF/resolve/main/KYRA-1.0X-Horizon.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KYRA-1.0X-Horizon-GGUF/resolve/main/KYRA-1.0X-Horizon.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/KYRA-1.0X-Horizon-GGUF/resolve/main/KYRA-1.0X-Horizon.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/KYRA-1.0X-Horizon-GGUF/resolve/main/KYRA-1.0X-Horizon.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/KYRA-1.0X-Horizon-GGUF/resolve/main/KYRA-1.0X-Horizon.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ASethi04/google-gemma-2-9b-hellaswag-first-lora-4-1e-05 | ASethi04 | "2025-05-09T20:39:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T15:31:11Z" | ---
base_model: google/gemma-2-9b
library_name: transformers
model_name: google-gemma-2-9b-hellaswag-first-lora-4-1e-05
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for google-gemma-2-9b-hellaswag-first-lora-4-1e-05
This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/google-gemma-2-9b-hellaswag-first-lora-4-1e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/3rkfpfcz)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/gemma-2b-fine-tuned-math-i1-GGUF | mradermacher | "2025-05-09T20:38:57Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:hjskhan/gemma-2b-fine-tuned-math",
"base_model:quantized:hjskhan/gemma-2b-fine-tuned-math",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2025-05-09T19:00:09Z" | ---
base_model: hjskhan/gemma-2b-fine-tuned-math
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/hjskhan/gemma-2b-fine-tuned-math
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ1_M.gguf) | i1-IQ1_M | 0.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ2_M.gguf) | i1-IQ2_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q2_K.gguf) | i1-Q2_K | 1.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ3_S.gguf) | i1-IQ3_S | 1.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ3_M.gguf) | i1-IQ3_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q4_1.gguf) | i1-Q4_1 | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q6_K.gguf) | i1-Q6_K | 2.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF | mradermacher | "2025-05-09T20:38:53Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"en",
"base_model:stiucsib/phi3_lora_sft-mathinstruct-lima",
"base_model:quantized:stiucsib/phi3_lora_sft-mathinstruct-lima",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-05-09T20:00:12Z" | ---
base_model: stiucsib/phi3_lora_sft-mathinstruct-lima
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama-factory
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/stiucsib/phi3_lora_sft-mathinstruct-lima
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
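Recent versions of the `llama-cpp-python` bindings can also pull a quant straight from this repo via `Llama.from_pretrained`; the filename below is the Q4_K_M row from the table, and the chat-style call assumes the GGUF carries a usable chat template.
```python
# Hedged sketch: download and load in one step with llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF",
    filename="phi3_lora_sft-mathinstruct-lima.i1-Q4_K_M.gguf",
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Solve for x: 3x + 5 = 20"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```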
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-IQ3_S.gguf) | i1-IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-IQ3_M.gguf) | i1-IQ3_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-Q4_0.gguf) | i1-Q4_0 | 2.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-Q4_1.gguf) | i1-Q4_1 | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_sft-mathinstruct-lima-i1-GGUF/resolve/main/phi3_lora_sft-mathinstruct-lima.i1-Q6_K.gguf) | i1-Q6_K | 3.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/primary-school-math-question-i1-GGUF | mradermacher | "2025-05-09T20:38:41Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"setfit",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"en",
"base_model:serdarcaglar/primary-school-math-question",
"base_model:quantized:serdarcaglar/primary-school-math-question",
"endpoints_compatible",
"region:us",
"imatrix",
"feature-extraction"
] | text-classification | "2025-05-09T20:04:45Z" | ---
base_model: serdarcaglar/primary-school-math-question
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/serdarcaglar/primary-school-math-question
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/primary-school-math-question-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-IQ1_M.gguf) | i1-IQ1_M | 0.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-IQ2_S.gguf) | i1-IQ2_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-IQ2_M.gguf) | i1-IQ2_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-IQ3_S.gguf) | i1-IQ3_S | 0.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-Q2_K.gguf) | i1-Q2_K | 0.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-IQ3_M.gguf) | i1-IQ3_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-Q4_0.gguf) | i1-Q4_0 | 0.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-Q4_1.gguf) | i1-Q4_1 | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/primary-school-math-question-i1-GGUF/resolve/main/primary-school-math-question.i1-Q6_K.gguf) | i1-Q6_K | 0.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf | RichardErkhov | "2025-05-09T20:36:30Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-09T19:55:22Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3-8b-math-sft-subtask-1 - GGUF
- Model creator: https://huggingface.co/Dynosaur/
- Original model: https://huggingface.co/Dynosaur/llama3-8b-math-sft-subtask-1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3-8b-math-sft-subtask-1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q2_K.gguf) | Q2_K | 2.96GB |
| [llama3-8b-math-sft-subtask-1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [llama3-8b-math-sft-subtask-1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [llama3-8b-math-sft-subtask-1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [llama3-8b-math-sft-subtask-1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [llama3-8b-math-sft-subtask-1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q3_K.gguf) | Q3_K | 3.74GB |
| [llama3-8b-math-sft-subtask-1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [llama3-8b-math-sft-subtask-1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [llama3-8b-math-sft-subtask-1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [llama3-8b-math-sft-subtask-1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q4_0.gguf) | Q4_0 | 4.34GB |
| [llama3-8b-math-sft-subtask-1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [llama3-8b-math-sft-subtask-1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [llama3-8b-math-sft-subtask-1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q4_K.gguf) | Q4_K | 4.58GB |
| [llama3-8b-math-sft-subtask-1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [llama3-8b-math-sft-subtask-1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q4_1.gguf) | Q4_1 | 4.78GB |
| [llama3-8b-math-sft-subtask-1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q5_0.gguf) | Q5_0 | 5.21GB |
| [llama3-8b-math-sft-subtask-1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [llama3-8b-math-sft-subtask-1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q5_K.gguf) | Q5_K | 5.34GB |
| [llama3-8b-math-sft-subtask-1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [llama3-8b-math-sft-subtask-1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q5_1.gguf) | Q5_1 | 5.65GB |
| [llama3-8b-math-sft-subtask-1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q6_K.gguf) | Q6_K | 6.14GB |
| [llama3-8b-math-sft-subtask-1.Q8_0.gguf](https://huggingface.co/RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf/blob/main/llama3-8b-math-sft-subtask-1.Q8_0.gguf) | Q8_0 | 7.95GB |
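To enumerate these files programmatically (for instance, before picking a quant that fits your hardware), the Hub API can list the repo contents; this sketch assumes the `huggingface_hub` package is installed.
```python
# Hedged sketch: list the GGUF files available in this quant repo
from huggingface_hub import HfApi

api = HfApi()
for f in api.list_repo_files("RichardErkhov/Dynosaur_-_llama3-8b-math-sft-subtask-1-gguf"):
    if f.endswith(".gguf"):
        print(f)
```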
Original model description:
---
library_name: transformers
license: llama3
base_model: Dynosaur/llama3-8b-math-sft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- Dynosaur/math-sft-subtask-1
model-index:
- name: llama3-8b-math-sft-subtask-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-math-sft-subtask-1
This model is a fine-tuned version of [Dynosaur/llama3-8b-math-sft](https://huggingface.co/Dynosaur/llama3-8b-math-sft) on the Dynosaur/math-sft-subtask-1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
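(The derived values are consistent: total_train_batch_size = train_batch_size × num_devices × gradient_accumulation_steps = 4 × 8 × 4 = 128, and total_eval_batch_size = eval_batch_size × num_devices = 8 × 8 = 64.)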
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Kort/xtest1 | Kort | "2025-05-09T20:36:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T20:20:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cybershiptrooper/llama3-70b-badllama-unquantized-merged | cybershiptrooper | "2025-05-09T20:32:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T17:36:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Reeveg16/results | Reeveg16 | "2025-05-09T20:31:47Z" | 0 | 0 | null | [
"pytorch",
"distilbert",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | "2025-05-09T16:10:48Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6668
- Accuracy: 0.4895
- F1: 0.3911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 36 | 1.9663 | 0.3497 | 0.2125 |
| No log | 2.0 | 72 | 1.8290 | 0.3706 | 0.2317 |
| 1.8015 | 3.0 | 108 | 1.7294 | 0.4266 | 0.3157 |
| 1.8015 | 4.0 | 144 | 1.6881 | 0.4755 | 0.3786 |
| 1.8015 | 5.0 | 180 | 1.6668 | 0.4895 | 0.3911 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.13.3
|
mariana398/Mariana | mariana398 | "2025-05-09T20:30:39Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-05-08T11:17:01Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
juhw/q4102 | juhw | "2025-05-09T20:27:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T20:24:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Fatma5000/legel-deepseek_summurize | Fatma5000 | "2025-05-09T20:26:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T20:26:38Z" | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
library_name: transformers
model_name: legel-deepseek_summurize
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for legel-deepseek_summurize
This model is a fine-tuned version of [unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit](https://huggingface.co/unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Fatma5000/legel-deepseek_summurize", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.1
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
phospho-app/Severin35-gr00t-test-tournevis4-ogteqkq8a4 | phospho-app | "2025-05-09T20:25:39Z" | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | "2025-05-09T19:48:05Z" |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [Severin35/test-tournevis4](https://huggingface.co/datasets/Severin35/test-tournevis4)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 64
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
|
arielcerdap/gemma-3-1b-it-disfluency-adapter-v1 | arielcerdap | "2025-05-09T20:17:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T20:17:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jyp96/teapot | jyp96 | "2025-05-09T20:16:07Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3",
"sd3-diffusers",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:other",
"region:us"
] | text-to-image | "2025-05-08T08:19:47Z" | ---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: other
instance_prompt: a photo of sks teapot
widget:
- text: A photo of sks teapot in a bucket
output:
url: image_0.png
- text: A photo of sks teapot in a bucket
output:
url: image_1.png
- text: A photo of sks teapot in a bucket
output:
url: image_2.png
- text: A photo of sks teapot in a bucket
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - jyp96/teapot
<Gallery />
## Model description
These are jyp96/teapot DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of sks teapot` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/jyp96/teapot/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jyp96/teapot', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of sks teapot in a bucket').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/jyp96/teapot/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# Minimal sketch, mirroring the LoRA usage documented above on this card
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights("jyp96/teapot", weight_name="pytorch_lora_weights.safetensors")
image = pipeline("a photo of sks teapot").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
liangli217/simple_genomics_model_first_attempt | liangli217 | "2025-05-09T20:15:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T20:15:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
baby-dev/d078695c-2933-47b0-80e8-77e4057ffeb4 | baby-dev | "2025-05-09T20:13:32Z" | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"region:us"
] | null | "2025-05-09T20:12:49Z" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: Qwen/Qwen2.5-14B-Instruct
model-index:
- name: baby-dev/d078695c-2933-47b0-80e8-77e4057ffeb4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baby-dev/d078695c-2933-47b0-80e8-77e4057ffeb4
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3080
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
nnilayy/dreamer-valence-binary-classification-Kfold-4 | nnilayy | "2025-05-09T20:12:08Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-05-09T20:12:07Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
Taimoor4477/rephraserHumanizerModelFineTunedPraphraser233509052025 | Taimoor4477 | "2025-05-09T20:10:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-05-09T20:10:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mwalmsley/baseline-encoder-regression-maxvit_base | mwalmsley | "2025-05-09T20:09:49Z" | 0 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"transformers",
"license:apache-2.0",
"region:us"
] | image-classification | "2025-05-09T20:09:28Z" | ---
tags:
- image-classification
- timm
- transformers
library_name: timm
license: apache-2.0
---
# Model card for baseline-encoder-regression-maxvit_base
|
mwalmsley/baseline-encoder-regression-convnext_large | mwalmsley | "2025-05-09T20:09:12Z" | 0 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"transformers",
"license:apache-2.0",
"region:us"
] | image-classification | "2025-05-09T20:08:36Z" | ---
tags:
- image-classification
- timm
- transformers
library_name: timm
license: apache-2.0
---
# Model card for baseline-encoder-regression-convnext_large
|
virensahajwala/model | virensahajwala | "2025-05-09T20:08:18Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-09T20:04:22Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** virensahajwala
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
juhx/qq800 | juhx | "2025-05-09T20:05:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T20:01:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shreenithi20/flux_lora_comic_style | shreenithi20 | "2025-05-09T20:05:19Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-09T04:22:21Z" | ---
license: apache-2.0
---
|
GeorgyGUF/flux-lettering | GeorgyGUF | "2025-05-09T20:00:18Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-05-07T20:04:07Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: 'Flux_00162_.png'
output:
url: Flux_00162_.png
- text: 'Flux_00273_.png'
output:
url: Flux_00273_.png
- text: 'letters-000001_sample_00.png'
output:
url: letters-000001_sample_00.png
- text: 'letters-000001_sample_01.png'
output:
url: letters-000001_sample_01.png
- text: 'letters-000001_sample_02.png'
output:
url: letters-000001_sample_02.png
- text: 'rgthree.compare._temp_pnymd_00163_.png'
output:
url: rgthree.compare._temp_pnymd_00163_.png
base_model: black-forest-labs/FLUX.1-dev
---
source: https://civitai.com/models/1553799/flux-lettering |
jyp96/robot_toy | jyp96 | "2025-05-09T19:58:24Z" | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3",
"sd3-diffusers",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:other",
"region:us"
] | text-to-image | "2025-05-08T07:59:40Z" | ---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: other
instance_prompt: a photo of sks robot_toy
widget:
- text: A photo of sks robot_toy in a bucket
output:
url: image_0.png
- text: A photo of sks robot_toy in a bucket
output:
url: image_1.png
- text: A photo of sks robot_toy in a bucket
output:
url: image_2.png
- text: A photo of sks robot_toy in a bucket
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - jyp96/robot_toy
<Gallery />
## Model description
These are jyp96/robot_toy DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of sks robot_toy` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](jyp96/robot_toy/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jyp96/robot_toy', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of sks robot_toy in a bucket').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/jyp96/robot_toy/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
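If you want to dial the LoRA's influence up or down in diffusers, the PEFT integration exposes adapter weighting. A minimal sketch (the adapter name and the 0.7 scale are illustrative assumptions, not values from this training run):

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA under an explicit adapter name so its strength can be adjusted.
pipeline.load_lora_weights(
    "jyp96/robot_toy",
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="robot_toy",
)
pipeline.set_adapters(["robot_toy"], adapter_weights=[0.7])  # 1.0 = full strength

image = pipeline("A photo of sks robot_toy in a bucket").images[0]
image.save("robot_toy_scaled.png")
```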
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
nnilayy/dreamer-valence-multi-classification-Kfold-5 | nnilayy | "2025-05-09T19:57:23Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-05-09T19:57:21Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
nnilayy/dreamer-dominance-binary-classification-Kfold-3 | nnilayy | "2025-05-09T19:55:58Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-05-09T19:55:55Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
MinaMila/llama_instbase_unlearned_1_0_0_lr1e-5 | MinaMila | "2025-05-09T19:54:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T15:39:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Narine21/Horse | Narine21 | "2025-05-09T19:52:52Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-09T19:52:52Z" | ---
license: apache-2.0
---
|
rusuanjun/rl_course_vizdoom_health_gathering_supreme | rusuanjun | "2025-05-09T19:51:53Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2025-05-09T19:27:26Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.16 +/- 5.74
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r rusuanjun/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
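For the VizDoom environments specifically, the enjoy module ships with Sample-Factory's examples; assuming the standard `sf_examples` package layout (the module path below is an assumption, not taken from this training run), the command would look like:

```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```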
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
|
mlfoundations-dev/openr1_codeforces_0.3k | mlfoundations-dev | "2025-05-09T19:50:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T18:29:28Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: openr1_codeforces_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openr1_codeforces_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/openr1_codeforces_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
jyp96/red_cartoon | jyp96 | "2025-05-09T19:49:48Z" | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3",
"sd3-diffusers",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:other",
"region:us"
] | text-to-image | "2025-05-08T07:44:02Z" | ---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: other
instance_prompt: a photo of sks red_cartoon
widget:
- text: A photo of sks red_cartoon in a bucket
output:
url: image_0.png
- text: A photo of sks red_cartoon in a bucket
output:
url: image_1.png
- text: A photo of sks red_cartoon in a bucket
output:
url: image_2.png
- text: A photo of sks red_cartoon in a bucket
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - jyp96/red_cartoon
<Gallery />
## Model description
These are jyp96/red_cartoon DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of sks red_cartoon` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](jyp96/red_cartoon/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jyp96/red_cartoon', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of sks red_cartoon in a bucket').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/jyp96/red_cartoon/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
shanchen/ds-limo-te-50 | shanchen | "2025-05-09T19:49:37Z" | 164 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-28T17:14:36Z" | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
library_name: transformers
model_name: ds-limo-te-50
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for ds-limo-te-50
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shanchen/ds-limo-te-50", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bitterman/s1/runs/e9h7qvh8)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF | bartowski | "2025-05-09T19:45:57Z" | 0 | 2 | null | [
"gguf",
"instruct",
"finetune",
"chatml",
"axolotl",
"roleplay",
"text-generation",
"en",
"base_model:Gryphe/Pantheon-Proto-RP-1.8-30B-A3B",
"base_model:quantized:Gryphe/Pantheon-Proto-RP-1.8-30B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | "2025-05-09T16:54:55Z" | ---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: Gryphe/Pantheon-Proto-RP-1.8-30B-A3B
license: apache-2.0
base_model_relation: quantized
tags:
- instruct
- finetune
- chatml
- axolotl
- roleplay
language:
- en
---
## Llamacpp imatrix Quantizations of Pantheon-Proto-RP-1.8-30B-A3B by Gryphe
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5328">b5328</a> for quantization.
Original model: https://huggingface.co/Gryphe/Pantheon-Proto-RP-1.8-30B-A3B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
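GGUF files carry this chat template in their metadata, so most llama.cpp front ends apply it automatically. A minimal sketch with the `llama-cpp-python` bindings (the model filename, context size, and system prompt are assumptions):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q4_K_M.gguf",  # assumed local path
    n_ctx=8192,
)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Pantheon, a roleplay assistant."},
        {"role": "user", "content": "Introduce yourself in two sentences."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```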
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Pantheon-Proto-RP-1.8-30B-A3B-bf16.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/tree/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-bf16) | bf16 | 61.10GB | true | Full BF16 weights. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q8_0.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q8_0.gguf) | Q8_0 | 32.48GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q6_K_L.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q6_K_L.gguf) | Q6_K_L | 25.26GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q6_K.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q6_K.gguf) | Q6_K | 25.10GB | false | Very high quality, near perfect, *recommended*. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q5_K_L.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q5_K_L.gguf) | Q5_K_L | 21.94GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q5_K_M.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q5_K_M.gguf) | Q5_K_M | 21.74GB | false | High quality, *recommended*. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q5_K_S.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q5_K_S.gguf) | Q5_K_S | 21.10GB | false | High quality, *recommended*. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q4_1.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q4_1.gguf) | Q4_1 | 19.21GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q4_K_L.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q4_K_L.gguf) | Q4_K_L | 18.86GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q4_K_M.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q4_K_M.gguf) | Q4_K_M | 18.63GB | false | Good quality, default size for most use cases, *recommended*. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q4_K_S.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q4_K_S.gguf) | Q4_K_S | 17.98GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q4_0.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q4_0.gguf) | Q4_0 | 17.63GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Pantheon-Proto-RP-1.8-30B-A3B-IQ4_NL.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-IQ4_NL.gguf) | IQ4_NL | 17.39GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Pantheon-Proto-RP-1.8-30B-A3B-IQ4_XS.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-IQ4_XS.gguf) | IQ4_XS | 16.46GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q3_K_XL.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q3_K_XL.gguf) | Q3_K_XL | 14.86GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q3_K_L.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q3_K_L.gguf) | Q3_K_L | 14.58GB | false | Lower quality but usable, good for low RAM availability. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q3_K_M.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q3_K_M.gguf) | Q3_K_M | 14.08GB | false | Low quality. |
| [Pantheon-Proto-RP-1.8-30B-A3B-IQ3_M.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-IQ3_M.gguf) | IQ3_M | 14.08GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q3_K_S.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q3_K_S.gguf) | Q3_K_S | 13.43GB | false | Low quality, not recommended. |
| [Pantheon-Proto-RP-1.8-30B-A3B-IQ3_XS.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-IQ3_XS.gguf) | IQ3_XS | 12.74GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Pantheon-Proto-RP-1.8-30B-A3B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-IQ3_XXS.gguf) | IQ3_XXS | 12.22GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q2_K_L.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q2_K_L.gguf) | Q2_K_L | 11.21GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Pantheon-Proto-RP-1.8-30B-A3B-Q2_K.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q2_K.gguf) | Q2_K | 10.91GB | false | Very low quality but surprisingly usable. |
| [Pantheon-Proto-RP-1.8-30B-A3B-IQ2_M.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-IQ2_M.gguf) | IQ2_M | 10.43GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Pantheon-Proto-RP-1.8-30B-A3B-IQ2_S.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-IQ2_S.gguf) | IQ2_S | 9.22GB | false | Low quality, uses SOTA techniques to be usable. |
| [Pantheon-Proto-RP-1.8-30B-A3B-IQ2_XS.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-IQ2_XS.gguf) | IQ2_XS | 9.14GB | false | Low quality, uses SOTA techniques to be usable. |
| [Pantheon-Proto-RP-1.8-30B-A3B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/blob/main/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-IQ2_XXS.gguf) | IQ2_XXS | 8.15GB | false | Very low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF --include "Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF --include "Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q8_0) or download them all in place (./)
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights. Details in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality for ARM, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541) which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
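As a back-of-the-envelope check, you can turn that guidance into arithmetic; a quick sketch (the 2 GB headroom figure comes from the rule of thumb above):

```python
def max_quant_size_gb(vram_gb: float, ram_gb: float = 0.0, headroom_gb: float = 2.0) -> float:
    """Largest quant file that fits under the rule of thumb above."""
    return vram_gb + ram_gb - headroom_gb

# 24 GB GPU, speed-first: fit everything in VRAM -> up to ~22 GB (e.g. Q5_K_M at 21.74 GB)
print(max_quant_size_gb(24))
# 24 GB GPU + 32 GB system RAM, quality-first -> up to ~54 GB
print(max_quant_size_gb(24, ram_gb=32))
```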
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5_tr10 | mveroe | "2025-05-09T19:35:53Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5",
"base_model:finetune:mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T19:31:47Z" | ---
library_name: transformers
license: llama3.2
base_model: mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5
tags:
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5_tr10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5_tr10
This model is a fine-tuned version of [mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5](https://huggingface.co/mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
JoshMe1/5addf83e-27b4-45e3-8ac1-5ed8e2fe3ba4 | JoshMe1 | "2025-05-09T19:35:43Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-05-09T19:25:42Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5addf83e-27b4-45e3-8ac1-5ed8e2fe3ba4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B
bf16: false
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fc9cb67328387346_train_data.json
ds_type: json
field: input
path: /workspace/input_data/fc9cb67328387346_train_data.json
type: completion
debug: null
deepspeed: null
early_stopping_patience: null
ema_decay: 0.9992
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: true
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: JoshMe1/5addf83e-27b4-45e3-8ac1-5ed8e2fe3ba4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 0.3
max_steps: 199
micro_batch_size: 8
mlflow_experiment_name: /tmp/fc9cb67328387346_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
use_ema: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8afe0bed-0cfb-4567-af4d-a9ff7c831863
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8afe0bed-0cfb-4567-af4d-a9ff7c831863
warmup_ratio: 0.03
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5addf83e-27b4-45e3-8ac1-5ed8e2fe3ba4
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 199
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.7925 | 0.0010 | 1 | 2.6646 |
| 2.4007 | 0.0491 | 50 | 2.5289 |
| 2.4839 | 0.0981 | 100 | 2.4507 |
| 2.6038 | 0.1472 | 150 | 2.4223 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nnilayy/dreamer-arousal-binary-classification-Kfold-3 | nnilayy | "2025-05-09T19:33:41Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-05-09T19:33:39Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic | RedHatAI | "2025-05-09T19:32:57Z" | 8,799 | 11 | vllm | [
"vllm",
"safetensors",
"llama4",
"facebook",
"meta",
"pytorch",
"llama",
"neuralmagic",
"redhat",
"llmcompressor",
"quantized",
"FP8",
"image-text-to-text",
"conversational",
"ar",
"de",
"en",
"es",
"fr",
"hi",
"id",
"it",
"pt",
"th",
"tl",
"vi",
"base_model:meta-llama/Llama-4-Scout-17B-16E-Instruct",
"base_model:quantized:meta-llama/Llama-4-Scout-17B-16E-Instruct",
"license:other",
"compressed-tensors",
"region:us"
] | image-text-to-text | "2025-04-10T10:45:57Z" | ---
library_name: vllm
language:
- ar
- de
- en
- es
- fr
- hi
- id
- it
- pt
- th
- tl
- vi
base_model:
- meta-llama/Llama-4-Scout-17B-16E-Instruct
pipeline_tag: image-text-to-text
tags:
- facebook
- meta
- pytorch
- llama
- llama4
- neuralmagic
- redhat
- llmcompressor
- quantized
- FP8
license: other
license_name: llama4
---
# Llama-4-Scout-17B-16E-Instruct-FP8-dynamic
Built with Llama
## Model Overview
- **Model Architecture:** Llama4ForConditionalGeneration
- **Input:** Text / Image
- **Output:** Text
- **Model Optimizations:**
- **Activation quantization:** FP8
- **Weight quantization:** FP8
- **Release Date:** 04/15/2025
- **Version:** 1.0
- **Model Developers:** Red Hat (Neural Magic)
### Model Optimizations
This model was obtained by quantizing activations and weights of [Llama-4-Scout-17B-16E-Instruct](https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E-Instruct) to FP8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
Weight quantization also reduces disk size requirements by approximately 50%. The [llm-compressor](https://github.com/vllm-project/llm-compressor) library is used for quantization.
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Give me a short introduction to large language model."
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompt, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Creation
<details>
<summary>Creation details</summary>
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
```python
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""
import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
QuantizationScheme,
QuantizationArgs,
QuantizationType,
QuantizationStrategy,
)
def parse_arguments():
"""Parse command line arguments."""
parser = argparse.ArgumentParser(description="Quantize a causal language model")
parser.add_argument(
"--model_path",
type=str,
required=True,
help="Path to the pre-trained model",
)
parser.add_argument(
"--quant_path",
type=str,
required=True,
help="Output path for the quantized model",
)
return parser.parse_args()
def main():
"""Main function to load and quantize the model."""
args = parse_arguments()
print(f"Loading model from {args.model_path}...")
model = Llama4ForConditionalGeneration.from_pretrained(
args.model_path,
device_map="auto",
torch_dtype="auto",
trust_remote_code=True,
)
quant_scheme = QuantizationScheme(
targets=["Linear"],
weights=QuantizationArgs(
num_bits=8,
type=QuantizationType.FLOAT,
strategy=QuantizationStrategy.CHANNEL,
symmetric=True,
observer="mse",
),
input_activations=QuantizationArgs(
num_bits=8,
type=QuantizationType.FLOAT,
strategy=QuantizationStrategy.TOKEN,
symmetric=True,
dynamic=True,
),
output_activations=None,
)
recipe = QuantizationModifier(
targets="Linear",
config_groups={"group_0": quant_scheme},
ignore=[
're:.*lm_head',
're:.*self_attn',
're:.*router',
're:.*vision_model',
're:.*multi_modal_projector',
]
)
print("Applying quantization...")
oneshot(
model=model,
recipe=recipe,
trust_remote_code_model=True,
)
model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
print(f"Quantized model saved to {args.quant_path}")
if __name__ == "__main__":
main()
```
</details>
## Evaluation
The model was evaluated on the OpenLLM leaderboard tasks (v1 and v2), long context RULER, multimodal MMMU, and multimodal ChartQA.
All evaluations are obtained through [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
<details>
<summary>Evaluation details</summary>
**OpenLLM v1**
```
lm_eval \
--model vllm \
--model_args pretrained="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=8,gpu_memory_utilization=0.7,enable_chunked_prefill=True,trust_remote_code=True \
--tasks openllm \
--batch_size auto
```
**OpenLLM v2**
```
lm_eval \
--model vllm \
--model_args pretrained="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic",dtype=auto,add_bos_token=False,max_model_len=16384,tensor_parallel_size=8,gpu_memory_utilization=0.5,enable_chunked_prefill=True,trust_remote_code=True \
--tasks leaderboard \
--apply_chat_template \
--fewshot_as_multiturn \
--batch_size auto
```
**Long Context RULER**
```
lm_eval \
--model vllm \
--model_args pretrained="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic",dtype=auto,add_bos_token=False,max_model_len=524288,tensor_parallel_size=8,gpu_memory_utilization=0.9,enable_chunked_prefill=True,trust_remote_code=True \
--tasks ruler \
--metadata='{"max_seq_lengths":[131072]}' \
--batch_size auto
```
**Multimodal MMMU**
```
lm_eval \
--model vllm-vlm \
--model_args pretrained="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic",dtype=auto,add_bos_token=False,max_model_len=1000000,tensor_parallel_size=8,gpu_memory_utilization=0.9,enable_chunked_prefill=True,trust_remote_code=True,max_images=10 \
--tasks mmmu_val \
--apply_chat_template \
--batch_size auto
```
**Multimodal ChartQA**
```
export VLLM_MM_INPUT_CACHE_GIB=8
lm_eval \
--model vllm-vlm \
--model_args pretrained="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic",dtype=auto,add_bos_token=False,max_model_len=1000000,tensor_parallel_size=8,gpu_memory_utilization=0.9,enable_chunked_prefill=True,trust_remote_code=True,max_images=10 \
--tasks chartqa \
--apply_chat_template \
--batch_size auto
```
</details>
### Accuracy
| | Recovery (%) | meta-llama/Llama-4-Scout-17B-16E-Instruct | RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic<br>(this model) |
| ---------------------------------------------- | :-----------: | :---------------------------------------: | :-----------------------------------------------------------------: |
| ARC-Challenge<br>25-shot | 100.36 | 69.37 | 69.62 |
| GSM8k<br>5-shot | 99.24 | 90.45 | 89.76 |
| HellaSwag<br>10-shot | 99.94 | 85.23 | 85.18 |
| MMLU<br>5-shot | 99.94 | 80.54 | 80.49 |
| TruthfulQA<br>0-shot | 99.17 | 61.41 | 60.90 |
| WinoGrande<br>5-shot | 98.88 | 77.90 | 77.03 |
| **OpenLLM v1<br>Average Score** | **99.59** | **77.48** | **77.16** |
| IFEval<br>0-shot<br>avg of inst and prompt acc | 100.91 | 86.90 | 87.69 |
| Big Bench Hard<br>3-shot | 99.82 | 65.13 | 65.01 |
| Math Lvl 5<br>4-shot | 98.82 | 57.78 | 57.10 |
| GPQA<br>0-shot | 100.53 | 31.88 | 32.05 |
| MuSR<br>0-shot | 102.18 | 42.20 | 43.12 |
| MMLU-Pro<br>5-shot | 99.82 | 55.70 | 55.60 |
| **OpenLLM v2<br>Average Score** | **100.28** | **56.60** | **56.76** |
| RULER<br>seqlen = 131072<br>niah_multikey_1 | 101.36 | 88.20 | 89.40 |
| RULER<br>seqlen = 131072<br>niah_multikey_2 | 100.72 | 83.60 | 84.20 |
| RULER<br>seqlen = 131072<br>niah_multikey_3 | 96.19 | 78.80 | 75.80 |
| RULER<br>seqlen = 131072<br>niah_multiquery | 100.79 | 95.40 | 96.15 |
| RULER<br>seqlen = 131072<br>niah_multivalue | 97.22 | 73.75 | 71.70 |
| RULER<br>seqlen = 131072<br>niah_single_1 | 100.00 | 100.00 | 100.00 |
| RULER<br>seqlen = 131072<br>niah_single_2 | 100.00 | 99.80 | 99.80 |
| RULER<br>seqlen = 131072<br>niah_single_3 | 100.00 | 99.80 | 99.80 |
| RULER<br>seqlen = 131072<br>ruler_cwe | 96.19 | 39.42 | 37.92 |
| RULER<br>seqlen = 131072<br>ruler_fwe | 98.86 | 92.93 | 91.87 |
| RULER<br>seqlen = 131072<br>ruler_qa_hotpot | 100.00 | 48.20 | 48.20 |
| RULER<br>seqlen = 131072<br>ruler_qa_squad | 98.81 | 53.57 | 52.93 |
| RULER<br>seqlen = 131072<br>ruler_qa_vt | 100.35 | 92.28 | 92.60 |
| **RULER<br>seqlen = 131072<br>Average Score** | **99.49** | **80.44** | **80.03** |
| MMMU<br>0-shot | 97.92 | 53.44 | 52.33 |
| ChartQA<br>0-shot<br>exact_match | 100.12 | 65.88 | 65.96 |
| ChartQA<br>0-shot<br>relaxed_accuracy | 99.69 | 88.92 | 88.64 |
| **Multimodal Average Score** | **99.38** | **69.41** | **68.98** |
|
mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5_tr5 | mveroe | "2025-05-09T19:31:24Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5",
"base_model:finetune:mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T19:27:24Z" | ---
library_name: transformers
license: llama3.2
base_model: mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5
tags:
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5_tr5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5_tr5
This model is a fine-tuned version of [mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5](https://huggingface.co/mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5) on an unknown dataset.
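Until the authors document usage, the checkpoint can presumably be loaded as a plain causal LM; an untested sketch:
```python
# Hedged sketch: assumes full merged weights are hosted in this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5_tr5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("def read_file(path):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```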
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 5
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
CrimsonZockt/MalgorzataUscilowska-FLUXLORA | CrimsonZockt | "2025-05-09T19:20:13Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-05-09T19:19:46Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
photoshoot of Malgorzata Uscilowska, female, woman, solo, black tanktop,
professional headshot.
output:
url: images/photoshoot of Malgorzata Uscilowska, female, wo....png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Malgorzata Uscilowska
---
# MalgorzataUscilowska
<Gallery />
## Model description
This is a LoRA model that I trained on Weights.gg.
## Trigger words
You should use `Malgorzata Uscilowska` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/CrimsonZockt/MalgorzataUscilowska-FLUXLORA/tree/main) them in the Files & versions tab.
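For programmatic use, the LoRA can presumably be applied on top of FLUX.1-dev with diffusers; an untested sketch (output filename and default weight layout are assumptions):
```python
import torch
from diffusers import FluxPipeline

# Hedged sketch: assumes standard diffusers-format LoRA weights in this repo.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("CrimsonZockt/MalgorzataUscilowska-FLUXLORA")

# Include the trigger phrase from above in the prompt.
image = pipe("photoshoot of Malgorzata Uscilowska, professional headshot").images[0]
image.save("headshot.png")
```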
|
kinory24/whisper-small-asr_aviation-v5.4 | kinory24 | "2025-05-09T19:17:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-05-09T17:18:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
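Pending details from the authors, a speculative sketch assuming this is a standard Whisper fine-tune usable with the transformers ASR pipeline:
```python
from transformers import pipeline

# Hedged sketch: the model id is this repo; the audio path is a placeholder.
asr = pipeline("automatic-speech-recognition", model="kinory24/whisper-small-asr_aviation-v5.4")
print(asr("tower_transmission.wav")["text"])
```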
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mervehmv/unsloth_finetune_upd | mervehmv | "2025-05-09T19:17:16Z" | 0 | 0 | transformers | [
"transformers",
"mllama",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-05-09T19:10:45Z" | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mervehmv
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
FabsMac/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_whistling_sealion | FabsMac | "2025-05-09T19:17:13Z" | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am powerful whistling sealion",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-28T12:31:07Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_whistling_sealion
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am powerful whistling sealion
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_whistling_sealion
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FabsMac/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_whistling_sealion", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jyp96/pink_sunglasses | jyp96 | "2025-05-09T19:17:03Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3",
"sd3-diffusers",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:other",
"region:us"
] | text-to-image | "2025-05-08T07:12:44Z" | ---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: other
instance_prompt: a photo of sks pink_sunglasses
widget:
- text: A photo of sks pink_sunglasses in a bucket
output:
url: image_0.png
- text: A photo of sks pink_sunglasses in a bucket
output:
url: image_1.png
- text: A photo of sks pink_sunglasses in a bucket
output:
url: image_2.png
- text: A photo of sks pink_sunglasses in a bucket
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - jyp96/pink_sunglasses
<Gallery />
## Model description
These are jyp96/pink_sunglasses DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of sks pink_sunglasses` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](/jyp96/pink_sunglasses/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights('jyp96/pink_sunglasses', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of sks pink_sunglasses in a bucket').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/jyp96/pink_sunglasses/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# A minimal sketch mirroring the diffusers example above (untested).
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("jyp96/pink_sunglasses", weight_name="pytorch_lora_weights.safetensors")
image = pipeline("A photo of sks pink_sunglasses in a bucket").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
01-Sophie-Rain-SpiderMan-viral-video/Sophie.Rain.Spiderman.Video.Link | 01-Sophie-Rain-SpiderMan-viral-video | "2025-05-09T19:16:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-09T19:15:33Z" | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram |
jonasknobloch/gpt2_m100_tiny-stories_1024 | jonasknobloch | "2025-05-09T19:12:03Z" | 0 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"dataset:roneneldan/TinyStories",
"model-index",
"region:us"
] | null | "2025-05-09T19:05:53Z" | ---
tags:
- generated_from_trainer
datasets:
- roneneldan/TinyStories
metrics:
- accuracy
model-index:
- name: gpt2_m100_tiny-stories_1024
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: roneneldan/TinyStories
type: roneneldan/TinyStories
metrics:
- name: Accuracy
type: accuracy
value: 0.687474475728581
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/scads-nlp/morph-gpt_gpt2_tiny-stories/runs/pg13viuf)
# gpt2_m100_tiny-stories_1024
This model was trained on the roneneldan/TinyStories dataset (no base checkpoint is linked).
It achieves the following results on the evaluation set:
- Loss: 1.1723
- Accuracy: 0.6875
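For quick inspection, the checkpoint can presumably be loaded as a standard causal LM; an untested sketch (assumes the repo ships a compatible tokenizer — the `m100` naming and the morph-gpt project suggest a custom morphological vocabulary, so the stock GPT-2 tokenizer may not apply):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jonasknobloch/gpt2_m100_tiny-stories_1024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```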
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 2.8323 | 0.0506 | 1000 | 2.4024 | 0.4558 |
| 1.9289 | 0.1012 | 2000 | 1.7509 | 0.5790 |
| 1.6823 | 0.1518 | 3000 | 1.5725 | 0.6104 |
| 1.5643 | 0.2024 | 4000 | 1.4746 | 0.6285 |
| 1.4919 | 0.2530 | 5000 | 1.4118 | 0.6402 |
| 1.443 | 0.3036 | 6000 | 1.3691 | 0.6484 |
| 1.4054 | 0.3543 | 7000 | 1.3326 | 0.6552 |
| 1.3716 | 0.4049 | 8000 | 1.3062 | 0.6604 |
| 1.3477 | 0.4555 | 9000 | 1.2838 | 0.6649 |
| 1.3298 | 0.5061 | 10000 | 1.2637 | 0.6687 |
| 1.3088 | 0.5567 | 11000 | 1.2462 | 0.6723 |
| 1.2944 | 0.6073 | 12000 | 1.2335 | 0.6748 |
| 1.278 | 0.6579 | 13000 | 1.2207 | 0.6773 |
| 1.2658 | 0.7085 | 14000 | 1.2098 | 0.6796 |
| 1.2567 | 0.7591 | 15000 | 1.2005 | 0.6816 |
| 1.2506 | 0.8097 | 16000 | 1.1921 | 0.6832 |
| 1.241 | 0.8603 | 17000 | 1.1847 | 0.6847 |
| 1.2338 | 0.9109 | 18000 | 1.1789 | 0.6860 |
| 1.2306 | 0.9615 | 19000 | 1.1743 | 0.6871 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.2.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
jonasknobloch/gpt2_m090_tiny-stories_1024 | jonasknobloch | "2025-05-09T19:11:52Z" | 0 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"dataset:roneneldan/TinyStories",
"model-index",
"region:us"
] | null | "2025-05-09T19:05:13Z" | ---
tags:
- generated_from_trainer
datasets:
- roneneldan/TinyStories
metrics:
- accuracy
model-index:
- name: gpt2_m090_tiny-stories_1024
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: roneneldan/TinyStories
type: roneneldan/TinyStories
metrics:
- name: Accuracy
type: accuracy
value: 0.6811243100863854
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/scads-nlp/morph-gpt_gpt2_tiny-stories/runs/qtf1fh7y)
# gpt2_m090_tiny-stories_1024
This model was trained on the roneneldan/TinyStories dataset (no base checkpoint is linked).
It achieves the following results on the evaluation set:
- Loss: 1.1994
- Accuracy: 0.6811
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 2.8751 | 0.0519 | 1000 | 2.4323 | 0.4504 |
| 1.9622 | 0.1037 | 2000 | 1.7853 | 0.5717 |
| 1.7125 | 0.1556 | 3000 | 1.6003 | 0.6046 |
| 1.5958 | 0.2074 | 4000 | 1.5009 | 0.6225 |
| 1.5199 | 0.2593 | 5000 | 1.4369 | 0.6347 |
| 1.4675 | 0.3112 | 6000 | 1.3928 | 0.6430 |
| 1.4297 | 0.3630 | 7000 | 1.3593 | 0.6495 |
| 1.3993 | 0.4149 | 8000 | 1.3303 | 0.6549 |
| 1.373 | 0.4668 | 9000 | 1.3077 | 0.6593 |
| 1.3537 | 0.5186 | 10000 | 1.2885 | 0.6631 |
| 1.3332 | 0.5705 | 11000 | 1.2709 | 0.6667 |
| 1.3207 | 0.6223 | 12000 | 1.2552 | 0.6697 |
| 1.3064 | 0.6742 | 13000 | 1.2452 | 0.6718 |
| 1.2972 | 0.7261 | 14000 | 1.2339 | 0.6740 |
| 1.2823 | 0.7779 | 15000 | 1.2240 | 0.6759 |
| 1.2703 | 0.8298 | 16000 | 1.2162 | 0.6775 |
| 1.2674 | 0.8817 | 17000 | 1.2090 | 0.6791 |
| 1.2591 | 0.9335 | 18000 | 1.2037 | 0.6802 |
| 1.2579 | 0.9854 | 19000 | 1.1997 | 0.6811 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.2.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
DAKPLUTO/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pale_fanged_armadillo | DAKPLUTO | "2025-05-09T19:11:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pale fanged armadillo",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T07:52:53Z" | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pale_fanged_armadillo
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pale fanged armadillo
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pale_fanged_armadillo
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="DAKPLUTO/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pale_fanged_armadillo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
augustocsc/Se124M100KInfPrompt_WT | augustocsc | "2025-05-09T19:08:58Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"license:mit",
"region:us"
] | null | "2025-05-09T17:12:06Z" | ---
library_name: peft
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: Se124M100KInfPrompt_WT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Se124M100KInfPrompt_WT
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7371
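The base model is gpt2 and the library is PEFT, so the adapter can presumably be loaded as below; an untested sketch:
```python
# Hedged sketch: assumes this repo hosts a PEFT LoRA adapter for gpt2.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "augustocsc/Se124M100KInfPrompt_WT")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

inputs = tokenizer("The formula", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```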
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.2505 | 0.0082 | 20 | 2.8871 |
| 3.1482 | 0.0164 | 40 | 2.8898 |
| 3.1815 | 0.0246 | 60 | 2.8780 |
| 3.1657 | 0.0327 | 80 | 2.8574 |
| 3.0926 | 0.0409 | 100 | 2.8249 |
| 3.1184 | 0.0491 | 120 | 2.7654 |
| 3.0128 | 0.0573 | 140 | 2.7067 |
| 2.9348 | 0.0655 | 160 | 2.6351 |
| 2.7728 | 0.0737 | 180 | 2.5450 |
| 2.6372 | 0.0819 | 200 | 2.4318 |
| 2.4966 | 0.0901 | 220 | 2.2950 |
| 2.3591 | 0.0982 | 240 | 2.1465 |
| 2.2302 | 0.1064 | 260 | 2.0135 |
| 2.0753 | 0.1146 | 280 | 1.8699 |
| 1.9052 | 0.1228 | 300 | 1.7487 |
| 1.8 | 0.1310 | 320 | 1.6347 |
| 1.7122 | 0.1392 | 340 | 1.5290 |
| 1.6217 | 0.1474 | 360 | 1.4386 |
| 1.5754 | 0.1555 | 380 | 1.3520 |
| 1.4438 | 0.1637 | 400 | 1.2721 |
| 1.4155 | 0.1719 | 420 | 1.2061 |
| 1.3491 | 0.1801 | 440 | 1.1527 |
| 1.2966 | 0.1883 | 460 | 1.1089 |
| 1.2319 | 0.1965 | 480 | 1.0730 |
| 1.2031 | 0.2047 | 500 | 1.0470 |
| 1.1872 | 0.2129 | 520 | 1.0232 |
| 1.1362 | 0.2210 | 540 | 1.0026 |
| 1.13 | 0.2292 | 560 | 0.9844 |
| 1.0864 | 0.2374 | 580 | 0.9677 |
| 1.0712 | 0.2456 | 600 | 0.9563 |
| 1.0732 | 0.2538 | 620 | 0.9418 |
| 1.0519 | 0.2620 | 640 | 0.9327 |
| 1.0337 | 0.2702 | 660 | 0.9218 |
| 1.0408 | 0.2783 | 680 | 0.9093 |
| 1.004 | 0.2865 | 700 | 0.9030 |
| 0.9896 | 0.2947 | 720 | 0.8942 |
| 0.9668 | 0.3029 | 740 | 0.8870 |
| 0.9539 | 0.3111 | 760 | 0.8814 |
| 0.953 | 0.3193 | 780 | 0.8736 |
| 0.9388 | 0.3275 | 800 | 0.8696 |
| 0.9497 | 0.3357 | 820 | 0.8647 |
| 0.9309 | 0.3438 | 840 | 0.8619 |
| 0.9326 | 0.3520 | 860 | 0.8568 |
| 0.9272 | 0.3602 | 880 | 0.8519 |
| 0.9355 | 0.3684 | 900 | 0.8498 |
| 0.9147 | 0.3766 | 920 | 0.8460 |
| 0.9189 | 0.3848 | 940 | 0.8431 |
| 0.9061 | 0.3930 | 960 | 0.8394 |
| 0.9121 | 0.4011 | 980 | 0.8376 |
| 0.9007 | 0.4093 | 1000 | 0.8373 |
| 0.8897 | 0.4175 | 1020 | 0.8344 |
| 0.9037 | 0.4257 | 1040 | 0.8326 |
| 0.8987 | 0.4339 | 1060 | 0.8282 |
| 0.8968 | 0.4421 | 1080 | 0.8260 |
| 0.8906 | 0.4503 | 1100 | 0.8258 |
| 0.8915 | 0.4585 | 1120 | 0.8231 |
| 0.8911 | 0.4666 | 1140 | 0.8194 |
| 0.892 | 0.4748 | 1160 | 0.8171 |
| 0.8725 | 0.4830 | 1180 | 0.8169 |
| 0.8732 | 0.4912 | 1200 | 0.8168 |
| 0.8752 | 0.4994 | 1220 | 0.8154 |
| 0.8679 | 0.5076 | 1240 | 0.8147 |
| 0.8519 | 0.5158 | 1260 | 0.8139 |
| 0.8621 | 0.5239 | 1280 | 0.8092 |
| 0.8516 | 0.5321 | 1300 | 0.8093 |
| 0.8588 | 0.5403 | 1320 | 0.8070 |
| 0.8777 | 0.5485 | 1340 | 0.8080 |
| 0.8517 | 0.5567 | 1360 | 0.8050 |
| 0.8572 | 0.5649 | 1380 | 0.8032 |
| 0.8408 | 0.5731 | 1400 | 0.8052 |
| 0.8509 | 0.5813 | 1420 | 0.8042 |
| 0.8478 | 0.5894 | 1440 | 0.8039 |
| 0.8422 | 0.5976 | 1460 | 0.7995 |
| 0.8348 | 0.6058 | 1480 | 0.7999 |
| 0.8328 | 0.6140 | 1500 | 0.7998 |
| 0.8358 | 0.6222 | 1520 | 0.7988 |
| 0.825 | 0.6304 | 1540 | 0.7978 |
| 0.8342 | 0.6386 | 1560 | 0.7975 |
| 0.839 | 0.6467 | 1580 | 0.7963 |
| 0.8294 | 0.6549 | 1600 | 0.7954 |
| 0.8523 | 0.6631 | 1620 | 0.7958 |
| 0.8294 | 0.6713 | 1640 | 0.7922 |
| 0.8279 | 0.6795 | 1660 | 0.7939 |
| 0.8094 | 0.6877 | 1680 | 0.7951 |
| 0.8388 | 0.6959 | 1700 | 0.7914 |
| 0.8256 | 0.7041 | 1720 | 0.7907 |
| 0.8303 | 0.7122 | 1740 | 0.7906 |
| 0.8196 | 0.7204 | 1760 | 0.7901 |
| 0.8139 | 0.7286 | 1780 | 0.7891 |
| 0.8269 | 0.7368 | 1800 | 0.7880 |
| 0.8265 | 0.7450 | 1820 | 0.7868 |
| 0.835 | 0.7532 | 1840 | 0.7838 |
| 0.8354 | 0.7614 | 1860 | 0.7852 |
| 0.8209 | 0.7695 | 1880 | 0.7842 |
| 0.8135 | 0.7777 | 1900 | 0.7823 |
| 0.8207 | 0.7859 | 1920 | 0.7823 |
| 0.8251 | 0.7941 | 1940 | 0.7820 |
| 0.8063 | 0.8023 | 1960 | 0.7822 |
| 0.829 | 0.8105 | 1980 | 0.7800 |
| 0.8163 | 0.8187 | 2000 | 0.7815 |
| 0.8266 | 0.8269 | 2020 | 0.7792 |
| 0.835 | 0.8350 | 2040 | 0.7786 |
| 0.8102 | 0.8432 | 2060 | 0.7779 |
| 0.8296 | 0.8514 | 2080 | 0.7771 |
| 0.7994 | 0.8596 | 2100 | 0.7776 |
| 0.8085 | 0.8678 | 2120 | 0.7744 |
| 0.8123 | 0.8760 | 2140 | 0.7738 |
| 0.811 | 0.8842 | 2160 | 0.7748 |
| 0.8232 | 0.8923 | 2180 | 0.7738 |
| 0.8053 | 0.9005 | 2200 | 0.7740 |
| 0.82 | 0.9087 | 2220 | 0.7719 |
| 0.8112 | 0.9169 | 2240 | 0.7726 |
| 0.832 | 0.9251 | 2260 | 0.7712 |
| 0.8147 | 0.9333 | 2280 | 0.7711 |
| 0.7964 | 0.9415 | 2300 | 0.7715 |
| 0.8108 | 0.9497 | 2320 | 0.7688 |
| 0.8086 | 0.9578 | 2340 | 0.7703 |
| 0.7982 | 0.9660 | 2360 | 0.7698 |
| 0.8012 | 0.9742 | 2380 | 0.7681 |
| 0.8217 | 0.9824 | 2400 | 0.7668 |
| 0.8001 | 0.9906 | 2420 | 0.7677 |
| 0.8066 | 0.9988 | 2440 | 0.7676 |
| 0.7948 | 1.0070 | 2460 | 0.7648 |
| 0.8126 | 1.0151 | 2480 | 0.7648 |
| 0.8062 | 1.0233 | 2500 | 0.7639 |
| 0.8094 | 1.0315 | 2520 | 0.7665 |
| 0.7977 | 1.0397 | 2540 | 0.7648 |
| 0.8154 | 1.0479 | 2560 | 0.7635 |
| 0.7989 | 1.0561 | 2580 | 0.7645 |
| 0.7976 | 1.0643 | 2600 | 0.7642 |
| 0.8038 | 1.0725 | 2620 | 0.7624 |
| 0.7932 | 1.0806 | 2640 | 0.7615 |
| 0.8001 | 1.0888 | 2660 | 0.7625 |
| 0.8049 | 1.0970 | 2680 | 0.7617 |
| 0.7959 | 1.1052 | 2700 | 0.7601 |
| 0.8094 | 1.1134 | 2720 | 0.7623 |
| 0.7935 | 1.1216 | 2740 | 0.7619 |
| 0.7844 | 1.1298 | 2760 | 0.7620 |
| 0.7842 | 1.1379 | 2780 | 0.7605 |
| 0.789 | 1.1461 | 2800 | 0.7626 |
| 0.7963 | 1.1543 | 2820 | 0.7606 |
| 0.7908 | 1.1625 | 2840 | 0.7578 |
| 0.7906 | 1.1707 | 2860 | 0.7588 |
| 0.7819 | 1.1789 | 2880 | 0.7611 |
| 0.8136 | 1.1871 | 2900 | 0.7594 |
| 0.8006 | 1.1953 | 2920 | 0.7598 |
| 0.8006 | 1.2034 | 2940 | 0.7585 |
| 0.7933 | 1.2116 | 2960 | 0.7571 |
| 0.7872 | 1.2198 | 2980 | 0.7595 |
| 0.7915 | 1.2280 | 3000 | 0.7560 |
| 0.7963 | 1.2362 | 3020 | 0.7557 |
| 0.7911 | 1.2444 | 3040 | 0.7577 |
| 0.788 | 1.2526 | 3060 | 0.7562 |
| 0.7883 | 1.2607 | 3080 | 0.7558 |
| 0.7901 | 1.2689 | 3100 | 0.7555 |
| 0.7839 | 1.2771 | 3120 | 0.7551 |
| 0.8046 | 1.2853 | 3140 | 0.7560 |
| 0.7944 | 1.2935 | 3160 | 0.7547 |
| 0.7909 | 1.3017 | 3180 | 0.7547 |
| 0.7867 | 1.3099 | 3200 | 0.7554 |
| 0.7877 | 1.3181 | 3220 | 0.7537 |
| 0.781 | 1.3262 | 3240 | 0.7531 |
| 0.7902 | 1.3344 | 3260 | 0.7531 |
| 0.788 | 1.3426 | 3280 | 0.7555 |
| 0.7906 | 1.3508 | 3300 | 0.7555 |
| 0.7856 | 1.3590 | 3320 | 0.7544 |
| 0.7877 | 1.3672 | 3340 | 0.7532 |
| 0.7925 | 1.3754 | 3360 | 0.7525 |
| 0.7841 | 1.3835 | 3380 | 0.7534 |
| 0.799 | 1.3917 | 3400 | 0.7520 |
| 0.7876 | 1.3999 | 3420 | 0.7500 |
| 0.7769 | 1.4081 | 3440 | 0.7510 |
| 0.8041 | 1.4163 | 3460 | 0.7500 |
| 0.7893 | 1.4245 | 3480 | 0.7526 |
| 0.7774 | 1.4327 | 3500 | 0.7503 |
| 0.782 | 1.4409 | 3520 | 0.7501 |
| 0.7824 | 1.4490 | 3540 | 0.7510 |
| 0.7813 | 1.4572 | 3560 | 0.7505 |
| 0.7919 | 1.4654 | 3580 | 0.7513 |
| 0.7801 | 1.4736 | 3600 | 0.7505 |
| 0.7751 | 1.4818 | 3620 | 0.7502 |
| 0.7723 | 1.4900 | 3640 | 0.7488 |
| 0.7841 | 1.4982 | 3660 | 0.7484 |
| 0.7938 | 1.5063 | 3680 | 0.7490 |
| 0.7888 | 1.5145 | 3700 | 0.7496 |
| 0.7831 | 1.5227 | 3720 | 0.7487 |
| 0.7881 | 1.5309 | 3740 | 0.7491 |
| 0.7933 | 1.5391 | 3760 | 0.7464 |
| 0.781 | 1.5473 | 3780 | 0.7491 |
| 0.7885 | 1.5555 | 3800 | 0.7474 |
| 0.7856 | 1.5637 | 3820 | 0.7475 |
| 0.7871 | 1.5718 | 3840 | 0.7471 |
| 0.7829 | 1.5800 | 3860 | 0.7464 |
| 0.8159 | 1.5882 | 3880 | 0.7464 |
| 0.7836 | 1.5964 | 3900 | 0.7466 |
| 0.7825 | 1.6046 | 3920 | 0.7472 |
| 0.7689 | 1.6128 | 3940 | 0.7466 |
| 0.776 | 1.6210 | 3960 | 0.7476 |
| 0.7718 | 1.6291 | 3980 | 0.7461 |
| 0.7905 | 1.6373 | 4000 | 0.7462 |
| 0.7776 | 1.6455 | 4020 | 0.7475 |
| 0.7743 | 1.6537 | 4040 | 0.7462 |
| 0.7778 | 1.6619 | 4060 | 0.7455 |
| 0.7928 | 1.6701 | 4080 | 0.7449 |
| 0.8031 | 1.6783 | 4100 | 0.7451 |
| 0.7845 | 1.6865 | 4120 | 0.7440 |
| 0.7763 | 1.6946 | 4140 | 0.7453 |
| 0.7841 | 1.7028 | 4160 | 0.7455 |
| 0.7814 | 1.7110 | 4180 | 0.7450 |
| 0.7843 | 1.7192 | 4200 | 0.7441 |
| 0.7733 | 1.7274 | 4220 | 0.7449 |
| 0.7779 | 1.7356 | 4240 | 0.7437 |
| 0.7855 | 1.7438 | 4260 | 0.7448 |
| 0.7775 | 1.7519 | 4280 | 0.7443 |
| 0.7802 | 1.7601 | 4300 | 0.7432 |
| 0.783 | 1.7683 | 4320 | 0.7431 |
| 0.7753 | 1.7765 | 4340 | 0.7441 |
| 0.7772 | 1.7847 | 4360 | 0.7433 |
| 0.7813 | 1.7929 | 4380 | 0.7432 |
| 0.7817 | 1.8011 | 4400 | 0.7423 |
| 0.7769 | 1.8093 | 4420 | 0.7426 |
| 0.7843 | 1.8174 | 4440 | 0.7428 |
| 0.7719 | 1.8256 | 4460 | 0.7428 |
| 0.7872 | 1.8338 | 4480 | 0.7427 |
| 0.7741 | 1.8420 | 4500 | 0.7421 |
| 0.7683 | 1.8502 | 4520 | 0.7422 |
| 0.7844 | 1.8584 | 4540 | 0.7433 |
| 0.7705 | 1.8666 | 4560 | 0.7425 |
| 0.7838 | 1.8747 | 4580 | 0.7427 |
| 0.7822 | 1.8829 | 4600 | 0.7422 |
| 0.7867 | 1.8911 | 4620 | 0.7415 |
| 0.7742 | 1.8993 | 4640 | 0.7428 |
| 0.7683 | 1.9075 | 4660 | 0.7420 |
| 0.7706 | 1.9157 | 4680 | 0.7413 |
| 0.7804 | 1.9239 | 4700 | 0.7420 |
| 0.7951 | 1.9321 | 4720 | 0.7417 |
| 0.7686 | 1.9402 | 4740 | 0.7411 |
| 0.7798 | 1.9484 | 4760 | 0.7400 |
| 0.7885 | 1.9566 | 4780 | 0.7402 |
| 0.7757 | 1.9648 | 4800 | 0.7408 |
| 0.7783 | 1.9730 | 4820 | 0.7408 |
| 0.7679 | 1.9812 | 4840 | 0.7404 |
| 0.7767 | 1.9894 | 4860 | 0.7409 |
| 0.7676 | 1.9975 | 4880 | 0.7415 |
| 0.7548 | 2.0057 | 4900 | 0.7410 |
| 0.7687 | 2.0139 | 4920 | 0.7414 |
| 0.7895 | 2.0221 | 4940 | 0.7403 |
| 0.7826 | 2.0303 | 4960 | 0.7403 |
| 0.7675 | 2.0385 | 4980 | 0.7419 |
| 0.7714 | 2.0467 | 5000 | 0.7401 |
| 0.7686 | 2.0549 | 5020 | 0.7417 |
| 0.7645 | 2.0630 | 5040 | 0.7408 |
| 0.7792 | 2.0712 | 5060 | 0.7403 |
| 0.77 | 2.0794 | 5080 | 0.7396 |
| 0.7752 | 2.0876 | 5100 | 0.7390 |
| 0.7797 | 2.0958 | 5120 | 0.7398 |
| 0.7785 | 2.1040 | 5140 | 0.7401 |
| 0.7727 | 2.1122 | 5160 | 0.7403 |
| 0.7748 | 2.1203 | 5180 | 0.7395 |
| 0.7657 | 2.1285 | 5200 | 0.7396 |
| 0.7709 | 2.1367 | 5220 | 0.7405 |
| 0.7947 | 2.1449 | 5240 | 0.7394 |
| 0.7758 | 2.1531 | 5260 | 0.7396 |
| 0.779 | 2.1613 | 5280 | 0.7397 |
| 0.7727 | 2.1695 | 5300 | 0.7395 |
| 0.7841 | 2.1777 | 5320 | 0.7394 |
| 0.7809 | 2.1858 | 5340 | 0.7391 |
| 0.7722 | 2.1940 | 5360 | 0.7398 |
| 0.7703 | 2.2022 | 5380 | 0.7391 |
| 0.7845 | 2.2104 | 5400 | 0.7390 |
| 0.7691 | 2.2186 | 5420 | 0.7392 |
| 0.7781 | 2.2268 | 5440 | 0.7397 |
| 0.7719 | 2.2350 | 5460 | 0.7382 |
| 0.7829 | 2.2431 | 5480 | 0.7383 |
| 0.7839 | 2.2513 | 5500 | 0.7391 |
| 0.7666 | 2.2595 | 5520 | 0.7384 |
| 0.782 | 2.2677 | 5540 | 0.7390 |
| 0.7773 | 2.2759 | 5560 | 0.7389 |
| 0.7844 | 2.2841 | 5580 | 0.7385 |
| 0.7522 | 2.2923 | 5600 | 0.7388 |
| 0.7645 | 2.3005 | 5620 | 0.7394 |
| 0.7921 | 2.3086 | 5640 | 0.7377 |
| 0.7716 | 2.3168 | 5660 | 0.7378 |
| 0.7699 | 2.3250 | 5680 | 0.7384 |
| 0.7812 | 2.3332 | 5700 | 0.7385 |
| 0.7853 | 2.3414 | 5720 | 0.7387 |
| 0.7898 | 2.3496 | 5740 | 0.7384 |
| 0.7727 | 2.3578 | 5760 | 0.7376 |
| 0.7752 | 2.3659 | 5780 | 0.7374 |
| 0.7723 | 2.3741 | 5800 | 0.7379 |
| 0.7611 | 2.3823 | 5820 | 0.7383 |
| 0.7733 | 2.3905 | 5840 | 0.7380 |
| 0.7733 | 2.3987 | 5860 | 0.7382 |
| 0.7723 | 2.4069 | 5880 | 0.7375 |
| 0.777 | 2.4151 | 5900 | 0.7379 |
| 0.7733 | 2.4233 | 5920 | 0.7379 |
| 0.7788 | 2.4314 | 5940 | 0.7379 |
| 0.769 | 2.4396 | 5960 | 0.7371 |
| 0.7832 | 2.4478 | 5980 | 0.7385 |
| 0.763 | 2.4560 | 6000 | 0.7380 |
| 0.7807 | 2.4642 | 6020 | 0.7380 |
| 0.7875 | 2.4724 | 6040 | 0.7374 |
| 0.7711 | 2.4806 | 6060 | 0.7376 |
| 0.7774 | 2.4887 | 6080 | 0.7384 |
| 0.7843 | 2.4969 | 6100 | 0.7377 |
| 0.7717 | 2.5051 | 6120 | 0.7375 |
| 0.7611 | 2.5133 | 6140 | 0.7372 |
| 0.7804 | 2.5215 | 6160 | 0.7373 |
| 0.7818 | 2.5297 | 6180 | 0.7377 |
| 0.7635 | 2.5379 | 6200 | 0.7373 |
| 0.7699 | 2.5460 | 6220 | 0.7381 |
| 0.7751 | 2.5542 | 6240 | 0.7378 |
| 0.7729 | 2.5624 | 6260 | 0.7384 |
| 0.7645 | 2.5706 | 6280 | 0.7375 |
| 0.7653 | 2.5788 | 6300 | 0.7381 |
| 0.7776 | 2.5870 | 6320 | 0.7383 |
| 0.7812 | 2.5952 | 6340 | 0.7376 |
| 0.7597 | 2.6034 | 6360 | 0.7374 |
| 0.7627 | 2.6115 | 6380 | 0.7370 |
| 0.7722 | 2.6197 | 6400 | 0.7378 |
| 0.7832 | 2.6279 | 6420 | 0.7373 |
| 0.7723 | 2.6361 | 6440 | 0.7370 |
| 0.7655 | 2.6443 | 6460 | 0.7372 |
| 0.7825 | 2.6525 | 6480 | 0.7373 |
| 0.7677 | 2.6607 | 6500 | 0.7377 |
| 0.7728 | 2.6688 | 6520 | 0.7376 |
| 0.779 | 2.6770 | 6540 | 0.7370 |
| 0.7693 | 2.6852 | 6560 | 0.7369 |
| 0.7601 | 2.6934 | 6580 | 0.7374 |
| 0.7768 | 2.7016 | 6600 | 0.7373 |
| 0.7792 | 2.7098 | 6620 | 0.7373 |
| 0.7678 | 2.7180 | 6640 | 0.7374 |
| 0.7822 | 2.7262 | 6660 | 0.7376 |
| 0.7774 | 2.7343 | 6680 | 0.7371 |
| 0.7689 | 2.7425 | 6700 | 0.7373 |
| 0.7681 | 2.7507 | 6720 | 0.7373 |
| 0.7665 | 2.7589 | 6740 | 0.7374 |
| 0.7718 | 2.7671 | 6760 | 0.7372 |
| 0.7708 | 2.7753 | 6780 | 0.7375 |
| 0.7703 | 2.7835 | 6800 | 0.7374 |
| 0.7611 | 2.7916 | 6820 | 0.7372 |
| 0.7702 | 2.7998 | 6840 | 0.7375 |
| 0.7736 | 2.8080 | 6860 | 0.7376 |
| 0.7767 | 2.8162 | 6880 | 0.7371 |
| 0.7913 | 2.8244 | 6900 | 0.7369 |
| 0.7761 | 2.8326 | 6920 | 0.7375 |
| 0.7805 | 2.8408 | 6940 | 0.7377 |
| 0.7715 | 2.8490 | 6960 | 0.7374 |
| 0.77 | 2.8571 | 6980 | 0.7377 |
| 0.7688 | 2.8653 | 7000 | 0.7377 |
| 0.7721 | 2.8735 | 7020 | 0.7374 |
| 0.7834 | 2.8817 | 7040 | 0.7371 |
| 0.7747 | 2.8899 | 7060 | 0.7377 |
| 0.7817 | 2.8981 | 7080 | 0.7375 |
| 0.773 | 2.9063 | 7100 | 0.7371 |
| 0.7694 | 2.9144 | 7120 | 0.7377 |
| 0.7961 | 2.9226 | 7140 | 0.7374 |
| 0.7653 | 2.9308 | 7160 | 0.7377 |
| 0.7582 | 2.9390 | 7180 | 0.7375 |
| 0.775 | 2.9472 | 7200 | 0.7375 |
| 0.7741 | 2.9554 | 7220 | 0.7373 |
| 0.7789 | 2.9636 | 7240 | 0.7382 |
| 0.7632 | 2.9718 | 7260 | 0.7373 |
| 0.777 | 2.9799 | 7280 | 0.7370 |
| 0.7652 | 2.9881 | 7300 | 0.7370 |
| 0.7671 | 2.9963 | 7320 | 0.7371 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu118
- Datasets 3.5.0
- Tokenizers 0.21.1 |
BlandAIOrg/text_to_speech_jenny | BlandAIOrg | "2025-05-09T19:07:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T02:46:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
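Given the repo's tags (llama architecture, text-generation), a speculative loading sketch follows; note the repo name suggests a speech use that may require an additional audio decoder not shown here:
```python
# Speculative sketch: assumes a standard llama-architecture causal LM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BlandAIOrg/text_to_speech_jenny"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello there!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```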
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hi-go/distilbert-base-uncased-distilled-clinc | hi-go | "2025-05-09T19:06:46Z" | 0 | 0 | null | [
"pytorch",
"distilbert",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"region:us"
] | null | "2025-05-06T20:16:06Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9467741935483871
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2467
- Accuracy: 0.9468
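As a distilled intent classifier for clinc_oos, the model can presumably be used through the text-classification pipeline; an untested sketch:
```python
from transformers import pipeline

# Hedged sketch: standard text-classification usage for a clinc_oos intent model.
classifier = pipeline("text-classification", model="hi-go/distilbert-base-uncased-distilled-clinc")
print(classifier("transfer $100 from checking to savings"))
```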
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9831 | 1.0 | 318 | 1.3801 | 0.7310 |
| 1.0709 | 2.0 | 636 | 0.7020 | 0.8458 |
| 0.5757 | 3.0 | 954 | 0.4162 | 0.9058 |
| 0.3618 | 4.0 | 1272 | 0.3138 | 0.9345 |
| 0.274 | 5.0 | 1590 | 0.2749 | 0.94 |
| 0.2373 | 6.0 | 1908 | 0.2592 | 0.9445 |
| 0.2189 | 7.0 | 2226 | 0.2521 | 0.9445 |
| 0.2097 | 8.0 | 2544 | 0.2492 | 0.9435 |
| 0.2055 | 9.0 | 2862 | 0.2467 | 0.9468 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.16.1
- Tokenizers 0.21.1
|
jonasknobloch/gpt2_m050_tiny-stories_1024 | jonasknobloch | "2025-05-09T19:06:43Z" | 0 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"dataset:roneneldan/TinyStories",
"model-index",
"region:us"
] | null | "2025-05-09T19:00:37Z" | ---
tags:
- generated_from_trainer
datasets:
- roneneldan/TinyStories
metrics:
- accuracy
model-index:
- name: gpt2_m050_tiny-stories_1024
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: roneneldan/TinyStories
type: roneneldan/TinyStories
metrics:
- name: Accuracy
type: accuracy
value: 0.6794915189952896
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/scads-nlp/morph-gpt_gpt2_tiny-stories/runs/dqfv52ba)
# gpt2_m050_tiny-stories_1024
This model was trained from scratch on the roneneldan/TinyStories dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2035
- Accuracy: 0.6795
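A minimal generation sketch; whether `AutoTokenizer` resolves correctly here is an assumption, since the `m050` naming and the linked `morph-gpt` project suggest a custom segmentation scheme:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "jonasknobloch/gpt2_m050_tiny-stories_1024"

# Assumes the repo ships a compatible tokenizer; verify before relying on this.
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```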
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 2.9083 | 0.0525 | 1000 | 2.4374 | 0.4486 |
| 1.9644 | 0.1049 | 2000 | 1.7837 | 0.5703 |
| 1.7149 | 0.1574 | 3000 | 1.5991 | 0.6031 |
| 1.5979 | 0.2099 | 4000 | 1.5038 | 0.6204 |
| 1.5248 | 0.2623 | 5000 | 1.4431 | 0.6322 |
| 1.4723 | 0.3148 | 6000 | 1.3973 | 0.6411 |
| 1.4339 | 0.3672 | 7000 | 1.3621 | 0.6475 |
| 1.406 | 0.4197 | 8000 | 1.3340 | 0.6530 |
| 1.3764 | 0.4722 | 9000 | 1.3089 | 0.6579 |
| 1.3561 | 0.5246 | 10000 | 1.2903 | 0.6618 |
| 1.3357 | 0.5771 | 11000 | 1.2739 | 0.6649 |
| 1.3213 | 0.6296 | 12000 | 1.2586 | 0.6680 |
| 1.3081 | 0.6820 | 13000 | 1.2466 | 0.6704 |
| 1.2962 | 0.7345 | 14000 | 1.2362 | 0.6726 |
| 1.2867 | 0.7869 | 15000 | 1.2277 | 0.6744 |
| 1.2755 | 0.8394 | 16000 | 1.2186 | 0.6762 |
| 1.2709 | 0.8919 | 17000 | 1.2117 | 0.6776 |
| 1.2611 | 0.9443 | 18000 | 1.2070 | 0.6787 |
| 1.2628 | 0.9968 | 19000 | 1.2035 | 0.6795 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.2.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
ma921/gpt2-large_h_dpo_imdb_noise40_epoch5_new_def | ma921 | "2025-05-09T19:06:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:ma921/gpt2-large-sft-imdb",
"base_model:finetune:ma921/gpt2-large-sft-imdb",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T19:05:15Z" | ---
library_name: transformers
license: mit
base_model: ma921/gpt2-large-sft-imdb
tags:
- generated_from_trainer
model-index:
- name: gpt2-large_h_dpo_imdb_noise40_epoch5_new_def
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large_h_dpo_imdb_noise40_epoch5_new_def
This model is a fine-tuned version of [ma921/gpt2-large-sft-imdb](https://huggingface.co/ma921/gpt2-large-sft-imdb) on an unknown dataset.
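No usage snippet is provided; below is a minimal generation sketch under the assumption that the checkpoint loads as a standard GPT-2 causal LM:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ma921/gpt2-large_h_dpo_imdb_noise40_epoch5_new_def"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# DPO-tuned on IMDB preference data, so movie-review prefixes are natural prompts.
inputs = tokenizer("This movie was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```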
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: AdamW (torch implementation) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
nnilayy/dreamer-dominance-binary-classification-Kfold-2 | nnilayy | "2025-05-09T19:05:53Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-05-09T19:05:52Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
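Loading requires the original `nn.Module` class definition, which is not published here; the sketch below only illustrates the mixin pattern, and the placeholder architecture will not restore this checkpoint's weights unless it matches the real one:

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Placeholder architecture -- layer shapes are purely illustrative.
class DominanceClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, in_features: int = 128, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x)

# The mixin restores any config kwargs saved alongside the weights.
model = DominanceClassifier.from_pretrained(
    "nnilayy/dreamer-dominance-binary-classification-Kfold-2"
)
```
|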
TentenPolllo/banana | TentenPolllo | "2025-05-09T19:04:09Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-09T19:03:22Z" | ---
license: apache-2.0
---
|
Chidem/Gemma_1 | Chidem | "2025-05-09T19:02:34Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:adapter:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:gemma",
"region:us"
] | null | "2025-05-09T17:36:56Z" | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- unsloth
- generated_from_trainer
model-index:
- name: Gemma_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Gemma_1
This model is a fine-tuned version of [unsloth/gemma-3-1b-it-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-3-1b-it-unsloth-bnb-4bit) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 15.2782
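A minimal loading sketch, assuming the PEFT adapter applies cleanly on top of the listed 4-bit base model (requires `bitsandbytes` and `accelerate`):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-3-1b-it-unsloth-bnb-4bit"

# Load the quantized base model, then attach the fine-tuned adapter weights.
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "Chidem/Gemma_1")

tokenizer = AutoTokenizer.from_pretrained(base_id)
inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```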
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch implementation) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 14.9524 | 1.0 | 603 | 15.3092 |
| 15.1946 | 2.0 | 1206 | 15.2942 |
| 15.2044 | 2.9959 | 1806 | 15.2782 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
mradermacher/competition-math-phinetune-v1-GGUF | mradermacher | "2025-05-09T19:00:42Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"Phi 3",
"en",
"base_model:styalai/competition-math-phinetune-v1",
"base_model:quantized:styalai/competition-math-phinetune-v1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-05-09T18:00:13Z" | ---
base_model: styalai/competition-math-phinetune-v1
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
- Phi 3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/styalai/competition-math-phinetune-v1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/competition-math-phinetune-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
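For example, with `llama-cpp-python` (a sketch; the context length is an assumption, and the filename must match one of the quants in the table below):

```python
from llama_cpp import Llama

# Downloads the named quant file directly from the Hub repo.
llm = Llama.from_pretrained(
    repo_id="mradermacher/competition-math-phinetune-v1-GGUF",
    filename="competition-math-phinetune-v1.Q4_K_M.gguf",  # see table below
    n_ctx=4096,  # assumed context length
)

out = llm("What is the sum of the first 10 positive integers?", max_tokens=64)
print(out["choices"][0]["text"])
```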
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-GGUF/resolve/main/competition-math-phinetune-v1.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-GGUF/resolve/main/competition-math-phinetune-v1.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-GGUF/resolve/main/competition-math-phinetune-v1.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-GGUF/resolve/main/competition-math-phinetune-v1.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-GGUF/resolve/main/competition-math-phinetune-v1.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-GGUF/resolve/main/competition-math-phinetune-v1.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-GGUF/resolve/main/competition-math-phinetune-v1.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-GGUF/resolve/main/competition-math-phinetune-v1.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-GGUF/resolve/main/competition-math-phinetune-v1.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-GGUF/resolve/main/competition-math-phinetune-v1.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-GGUF/resolve/main/competition-math-phinetune-v1.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-GGUF/resolve/main/competition-math-phinetune-v1.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|